Transform Ollama into a Professional Recruiter AI
Deploy a consistent, role-playing recruiter chatbot using Ollama in Docker with custom Modelfiles—perfect for:
✔ Mock interviews
✔ Candidate screening automation
✔ HR training simulations
1. Set Up Ollama in Docker (Production-Ready)
Step 1: Pull & Run the Official Ollama Docker Image
docker run -d -p 11434:11434 --name recruiter-ai ollama/ollama
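The container exposes Ollama's REST API on port 11434. Before going further, confirm the server is up by hitting the root endpoint:
curl http://localhost:11434
# Expected: "Ollama is running"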
Step 2: Create a Persistent Recruiter Modelfile
Save this as RecruiterModelfile, then copy it into your Docker container with docker cp (the command follows the Modelfile below):
# Base model (alternatives: llama3, phi3)
FROM mistral
# System prompt defines the AI's role
SYSTEM """
You are Alex Carter, a senior tech recruiter specializing in:
- Software engineering (backend/frontend)
- DevOps (AWS, Kubernetes)
- Data science (Python, SQL)
Guidelines:
1. Always greet candidates professionally
2. Ask 1 behavioral + 1 technical question per interaction
3. Provide brief, constructive feedback
4. Use corporate tone (no slang)
"""
# Optimize response quality: lower temperature = less randomness
PARAMETER temperature 0.3
# Discourage repetitive phrasing
PARAMETER repeat_penalty 1.1
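Now copy the file into the running container and pull the base model. A minimal sketch, assuming the container name from Step 1 and /root as the destination path (some Ollama versions expect the FROM model to already be present locally before ollama create runs):
docker cp RecruiterModelfile recruiter-ai:/root/RecruiterModelfile
docker exec -it recruiter-ai ollama pull mistral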
Step 3: Build & Run Your Recruiter AI
# Build the custom model
docker exec -it recruiter-ai ollama create recruiter -f /root/RecruiterModelfile
# Start a session
docker exec -it recruiter-ai ollama run recruiter
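For a quick smoke test without opening an interactive session, ollama run also accepts a one-shot prompt as an argument:
docker exec -it recruiter-ai ollama run recruiter "Introduce yourself and ask your first interview question."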
2. Initiate Role-Play Interviews
Example Workflow
- AI Starts the Interview
"Hello! I’m Alex from the Talent Acquisition team. Thanks for joining today.
Let’s begin with: Describe a time you led a technical project under tight deadlines."
- Candidate Responds
"I optimized our API latency by 40% using Redis caching last quarter."
- AI Provides Feedback
"Great results! Could you share the baseline metrics? Now, let’s discuss
your approach to database indexing for high-traffic systems."
3. Advanced Deployment
A. Production Container with Environment Variables & Persistent Storage
docker run -d \
  -p 11434:11434 \
  -e OLLAMA_HOST=0.0.0.0 \
  -v ollama_data:/root/.ollama \
  --restart unless-stopped \
  --name recruiter-ai \
  ollama/ollama
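One caveat: OLLAMA_HOST=0.0.0.0 makes the server listen on all interfaces inside the container, and -p 11434:11434 publishes it on every host interface. If the API should only be reachable from the host machine itself, bind the published port to loopback instead:
docker run -d \
  -p 127.0.0.1:11434:11434 \
  -e OLLAMA_HOST=0.0.0.0 \
  -v ollama_data:/root/.ollama \
  --restart unless-stopped \
  --name recruiter-ai \
  ollama/ollama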
B. Integrate with APIs (Python Example)
import requests

# One-shot generation against the custom model (stateless endpoint)
response = requests.post(
    'http://localhost:11434/api/generate',
    json={
        "model": "recruiter",
        "prompt": "Ask a Python technical question.",
        "stream": False,  # return one JSON object instead of a token stream
    }
)
print(response.json()["response"])
# Example output (responses vary): "How would you handle memory leaks in a long-running Python process?"
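Because /api/generate is stateless, each call starts fresh. For multi-turn interviews, Ollama also exposes /api/chat, which accepts the running message history. A minimal sketch (the candidate messages are illustrative):
import requests

# Conversation history; each call sends the full transcript so the
# recruiter persona keeps context across turns.
history = [{"role": "user", "content": "Hi, I'm interviewing for a backend role."}]

reply = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "recruiter", "messages": history, "stream": False},
).json()["message"]
print(reply["content"])  # greeting + first question

# Record both sides of the exchange before the next call
history.append(reply)
history.append({"role": "user", "content": "I cut API latency 40% with Redis caching."})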
Why This Works Best
- Consistency: Modelfile locks the recruiter persona.
- Portability: Docker containers run anywhere (cloud/on-prem).
- Scalability: Deploy multiple interview bots via Kubernetes.
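As a quick illustration of the scalability point (a sketch only; a real deployment would add a persistent volume and GPU scheduling):
kubectl create deployment recruiter-ai --image=ollama/ollama --port=11434 --replicas=3
kubectl expose deployment recruiter-ai --port=11434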
Next Steps? Try:
🔹 Different models (llama3 for longer conversations)
🔹 Webhook integrations with your ATS
🔹 Frontend UI using Ollama’s API
Need help customizing? Ask below! 👇 #AI #Recruitment #Docker #Ollama