Crafting AI Personas with System Prompts & CoT: Building Hitesh Choudhary’s Digital Twin

Welcome to the Hitesh Choudhary AI Persona Bot — a terminal-based chatbot powered by OpenAI's GPT-4o-mini, designed to simulate conversations in Hitesh sir's tone, expertise, and teaching style.
In this blog, I'll walk you through the entire setup: how the bot works under the hood, why a `.env` file is used, and how previous chats are stored to maintain contextual conversations. This is the secret sauce behind how an AI sounds, responds, thinks, and even feels, shown through a real-life system prompt that makes an AI respond like Hitesh Choudhary, a beloved educator and mentor in the Indian dev community.
This blog dives deep into two parts:
1. Understanding system prompts: crafting custom AI personas.
2. Building and running a terminal-based chatbot with contextual memory using Python.
1. 🧠 Understanding System Prompts: Crafting Custom AI Personas
One of the most powerful tools when building with OpenAI’s API is the system prompt — a special instruction sent at the start of the chat to shape the behavior of the AI.
🔍 What is a System Prompt?
A system prompt is a special instruction given to the AI before any conversation begins. It sets the ground rules for how the AI should behave throughout the session.
Think of it as an invisible teacher whispering into the AI's ear: "Bhai, tu ab Hitesh Choudhary ban gaya hai. Ab ussi style mein baat karega." ("Bro, you are now Hitesh Choudhary. From now on, you'll talk in exactly his style.") 😄
It controls:
- Tone
- Personality
- Language (like Hinglish)
- Response structure (like JSON)
- Level of empathy, humor, or storytelling
```python
SYSTEM_PROMPT = """
You are Hitesh Choudhary – a tech educator, mentor, and software engineer known for your practical teaching style...
"""
```
👤 Defining a Persona (Why it matters)
In our bot, the AI is instructed to take on the persona of Hitesh Choudhary:
"You are an AI persona of Hitesh Choudhary"
Why is this important?
It makes responses relatable, especially for a specific audience.
Persona brings consistency in voice — like Hinglish + empathy + seniority.
It adds emotional intelligence: instead of dry answers, you get “mann ki baat”.
🧠 Chain of Thought: Step-by-Step Thinking
This part is a game-changer. The prompt instructs the AI to think like a human mentor, using a logical sequence:
Sochta hoon ki user kis phase mein hai? (Thinking)
Analyze karte hain, kya samasya hai? (Analyzing)
Apne experience se relate karta hoon. (Validating)
Suggestion deta hoon – realistic, emotional, actionable. (Suggesting)
This is called Chain-of-Thought prompting — a method to guide the AI to not just give answers, but explain its thinking process step-by-step.
🧩 It results in deeper, more human responses that build trust.
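Here is a hypothetical sketch of how those four steps might be written into the system prompt. The exact wording below is illustrative, not the bot's actual prompt:

```python
# Illustrative fragment that could be appended to SYSTEM_PROMPT to
# enforce the four-step chain of thought (wording is hypothetical).
COT_RULES = """
Before answering, reason step by step:
1. Thinking: sochta hoon ki user kis phase mein hai (what phase is the user in?)
2. Analyzing: analyze karte hain, kya samasya hai (what is the real problem?)
3. Validating: apne experience se relate karta hoon (relate it to my own journey)
4. Suggesting: give a realistic, emotional, actionable suggestion
"""
```

Appending rules like these to the persona description nudges the model to walk through the mentor's thought process before it answers, instead of jumping straight to a generic reply.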
🔄 Hinglish Rules & Script Control
A unique rule in our prompt is:
"Convert all Hindi (Devanagari) to Hinglish using English alphabets."
This ensures:
Accessibility: not everyone can read देवनागरी.
Personality: Hitesh sir’s audience talks in Hinglish, not formal Hindi or English.
Strictly enforcing this makes the output feel authentic and culturally tuned.
📦 JSON-Only Format (Why it matters)
Every AI response must be wrapped like this:

```json
{
  "content": "<your answer here>"
}
```
Why enforce JSON?
- For programmatic readability (frontends can easily extract `content`)
- To standardize responses for structured apps
- To prevent hallucinated markdown, HTML, or styling

It's a developer-friendly constraint that forces clarity and control.
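As a sketch of what this buys you on the client side, here is a minimal helper (hypothetical, not part of the bot's code) that unwraps the `{"content": ...}` envelope and falls back to the raw text if the model ever breaks format:

```python
import json

def extract_content(reply: str) -> str:
    """Unwrap the {"content": "..."} envelope; fall back to the raw reply."""
    try:
        parsed = json.loads(reply)
        return parsed["content"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return reply

print(extract_content('{"content": "Code kar lo bhai!"}'))  # Code kar lo bhai!
print(extract_content("plain text reply"))                  # plain text reply
```

Because every valid response has the same shape, a frontend never needs to guess where the answer text lives.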
📚 The Role of Background & Proprietary Info
The prompt includes a rich backstory of the persona:
"Retired corporate professional... CTO of iNeuron... now full-time YouTuber."
Why include this?
It allows the AI to relate with real-life context.
Makes the advice more grounded: “main bhi uss phase se guzra hoon” feels real.
Adds credibility to every suggestion the AI gives.
In LLM terms, this is called injecting proprietary context or persona grounding.
📖 Why Examples Matter (Always)
The prompt says:
"Add simple / easy examples (for better approachability)."
This improves:
Comprehension: concepts become tangible
Confidence: users feel like “haan, samajh aa gaya”
Trust: you’re not just throwing jargon; you're teaching
Every good educator — human or AI — relies on examples. Your system prompt bakes it in.
2. 🚀 Building and running a terminal-based chatbot with contextual memory using Python
Let’s break down how we created the chatbot and how you can run it yourself.
Project structure:

```
gpt-persona-bot/
├── main.py            # Entry point – chat loop logic
├── prompt.py          # System prompt stored here
├── requirements.txt   # Python dependencies
└── .env               # Environment variables (e.g., OpenAI API key)
```
✅ Step 1: Setting Up Environment Variables
First, install `python-dotenv` and store your OpenAI API key in a `.env` file:

`.env`
```
OPENAI_API_KEY=yourApiKeyHere
```

This keeps sensitive keys out of your codebase.
✅ Step 2: Installing Dependencies
Install the required Python packages:

`requirements.txt`
```
openai
python-dotenv
```

Then run:

```shell
pip install -r requirements.txt
```
✅ Step 3: Creating the System Prompt
Inside `prompt.py`, define your custom persona like this:

`prompt.py`
```python
SYSTEM_PROMPT = """
You are Hitesh Choudhary – a tech educator and software engineer known for your real-world advice, no-fluff approach, and deep love for teaching programming. You answer like a mentor, always encouraging learners, and simplify complex topics using analogies and examples.
You love saying: "Code kar lo bhai!", "Aage badho", "Yeh samajhna zaroori hai".
Stick to a Hindi-English mix, be crisp, helpful, and motivational. Make learning fun!
Always reply as JSON in the form: {"content": "<your answer here>"}
"""
```

Note the last line: it enforces the JSON-only format we discussed earlier, so the parsing step in `main.py` can succeed.
✅ Step 4: Writing the Chatbot Logic
Now the core chatbot logic in `main.py`:

`main.py`
```python
import json

from dotenv import load_dotenv
from openai import OpenAI

from prompt import SYSTEM_PROMPT

load_dotenv()      # load OPENAI_API_KEY from .env into the environment
client = OpenAI()  # picks up the key from the environment automatically

# The running conversation: system prompt + every user/assistant turn
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

while True:
    query = input("> ")
    messages.append({"role": "user", "content": query})

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )

    assistant_reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": assistant_reply})

    # The prompt asks for {"content": "..."}; fall back to raw text if parsing fails
    try:
        parsed = json.loads(assistant_reply)
        print("Hitesh Choudhary:", parsed.get("content"))
    except json.JSONDecodeError:
        print("Hitesh Choudhary:", assistant_reply)
```
🔄 How the Bot Maintains Context
The most powerful part of this bot is how it remembers previous messages. We do that using a `messages` list that includes:
The system prompt
Every user input
Every assistant response
🧠 Why Context Matters
This method allows the bot to:
Remember what you just asked
Refer back to earlier questions
Maintain a continuous conversation
Improve coherence over time
Without it, every input would be treated like a brand-new, standalone question.
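One practical caveat: the `messages` list grows with every turn, and models have a finite context window. Here is a minimal sketch of history trimming (a hypothetical addition, not part of the bot above) that keeps the system prompt plus only the most recent turns:

```python
MAX_TURNS = 20  # hypothetical cap on the number of recent messages kept

def trim_history(messages: list[dict], max_turns: int = MAX_TURNS) -> list[dict]:
    """Keep the system prompt (index 0) plus the last `max_turns` messages."""
    system, rest = messages[0], messages[1:]
    return [system] + rest[-max_turns:]

# Simulate a long conversation: 1 system prompt + 30 user messages
history = [{"role": "system", "content": "persona prompt"}]
for i in range(30):
    history.append({"role": "user", "content": f"question {i}"})

history = trim_history(history)
print(len(history))  # 21: the system prompt + the 20 most recent messages
```

Calling `trim_history(messages)` before each API request would bound the request size while preserving the persona, at the cost of the bot forgetting the oldest turns.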
🔐 Why We Use `.env` and `load_dotenv`
We avoid hardcoding API keys directly in the script for security. Using `python-dotenv`:
```python
from dotenv import load_dotenv
load_dotenv()
```

This loads the `.env` values into your environment at runtime and keeps your API key out of your codebase, especially when pushing code to GitHub.
🎯 How to Run the Bot
```shell
git clone https://github.com/yourusername/gpt-persona-bot
cd gpt-persona-bot
pip install -r requirements.txt
python main.py
```
Then start chatting like this:

```
> Hitesh sir, React kaise sikhein?
Hitesh Choudhary: "Code kar lo bhai! React seekhne ke liye pehle JavaScript strong honi chahiye..."
```
📌 Final Thoughts
Creating a chatbot that sounds like your mentor is a blend of prompt engineering and smart caching of conversation history. With OpenAI and Python, it's now easier than ever to bring custom personas to life.
Whether you’re building this for fun, for a course, or as a stepping stone to full AI apps — this project teaches you the foundations of context-aware conversational AI.
Written by Vidya Sagar Mehar