A Persona Based Chatbot

How I Built a Persona AI Chatbot Inspired by Hitesh Choudhary
Hey there! In this article, I’ll walk you through how I created a Persona AI Chatbot that mimics the style and tone of Hitesh Choudhary, a popular YouTuber with channels like Chai aur Code and Hitesh Choudhary. This project was super fun, and I used some cool tools like Gemini 1.5 Flash, FastAPI, Render, Loveable for the UI, and Vercel for hosting the frontend. Let’s dive into the process, bilkul Hitesh-style!
The Idea Behind the Chatbot
Hitesh Choudhary is known for his friendly, approachable, and engaging teaching style. He often uses phrases like “Haanjii,” “Bilkul,” and “Chai peete rahe!” to connect with his audience. I wanted to create a chatbot that feels like you’re chatting with Hitesh himself—someone who gives coding advice, shares life tips, and keeps the vibe chill and fun.
To make this happen, I used few-shot prompting with the Gemini 1.5 Flash model to give the chatbot Hitesh’s personality. I also built a backend with FastAPI, hosted it on Render, and created a frontend with Loveable that’s hosted on Vercel. You can check out the live demo here:
https://persona-ai-lake.vercel.app/
Step 1: Setting Up the AI with Gemini 1.5 Flash
The heart of the chatbot is Gemini 1.5 Flash, a fast and efficient AI model by Google. I chose it because it’s great for generating human-like responses and can handle custom prompts well. To make the chatbot act like Hitesh, I used few-shot prompting, which means giving the AI a few examples of how Hitesh talks and behaves.
What is Few-Shot Prompting?
Few-shot prompting is like teaching the AI by example. Instead of just telling it to “act like Hitesh,” I provided a detailed system prompt with instructions and examples of Hitesh’s tone, phrases, and responses. For example, I included phrases like “Haanjii,” “Bilkul,” and “Chai peete rahe!” and showed how Hitesh might respond to coding questions or life advice queries. This helps the AI understand the vibe and mimic it accurately.
Here’s a simplified version of what I did in the system prompt:
Defined Hitesh’s persona: A software engineer and YouTuber who runs “Chai aur Code” and “Hitesh Choudhary” channels.
Set the tone: Friendly, casual, and approachable, like talking to a friend.
Added signature phrases: “Haanjii,” “Bilkul,” “Chai peete rahe,” “Kya haal chal?”
Gave example responses: For coding questions, I showed how Hitesh might suggest checking his YouTube channels. For life advice, I included examples of empathetic and practical responses.
Language handling: If the user types in Hindi, English, or a mix, the chatbot responds in the same language.
This system prompt was fed to Gemini 1.5 Flash to make the chatbot sound authentic.
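The seeding steps above can be sketched in code. Here is a minimal, illustrative version of how a few-shot prompt for Gemini can be assembled: a system prompt followed by example exchanges in Gemini's `user`/`model` role format. The wording of the prompt and examples here is my own illustration, not the project's exact prompt.

```python
# A minimal sketch of few-shot prompting: the system prompt is followed by
# example exchanges so the model can imitate the persona's tone.
# The prompt wording and examples below are illustrative only.

SYSTEM_PROMPT = """
Assume you are Hitesh Choudhary, a software engineer and educator
who runs the YouTube channels "Chai aur Code" and "Hitesh Choudhary".
Respond in a casual, friendly tone. Use phrases like "Haanjii",
"Bilkul", and "Chai peete rahe!". Reply in the user's language
(Hindi, English, or a mix).
"""

# Example exchanges in Gemini's user/model role format.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "parts": ["Sir, JavaScript kaise seekhun?"]},
    {"role": "model", "parts": [
        "Haanjii! Basics se shuru karo, phir projects banao. "
        "Chai aur Code pe full playlist hai. Chai peete rahe!"
    ]},
    {"role": "user", "parts": ["I feel stuck in my career."]},
    {"role": "model", "parts": [
        "Hey there! Feeling stuck is normal, bilkul. Pick one skill, "
        "build something small, and ship it. Momentum fixes most things."
    ]},
]

# The chat history starts with the prompt plus the examples;
# real user messages are appended after this seed.
chat_history = [{"role": "user", "parts": [SYSTEM_PROMPT]}] + FEW_SHOT_EXAMPLES
```

Because the examples sit at the front of the history, every later `generate_content` call sees them and stays in character.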
Step 2: Building the Backend with FastAPI
For the backend, I used FastAPI, a Python framework that’s fast and easy to use for creating APIs. The backend handles the communication between the frontend and the Gemini model. Here’s the code I wrote for the backend:
```python
from dotenv import load_dotenv
import os

import google.generativeai as genai
from fastapi import FastAPI
from pydantic import BaseModel
from fastapi.middleware.cors import CORSMiddleware

# Load the Gemini API key from the .env file
load_dotenv()
api_key = os.getenv("GEMINI_KEY")
genai.configure(api_key=api_key)
model = genai.GenerativeModel("gemini-1.5-flash")

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://localhost:3000",
        "https://persona-ai-lake.vercel.app",
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

class ChatRequest(BaseModel):
    message: str

SYSTEM_PROMPT = """
Assume you are Hitesh Choudhary, a software engineer and educator.
You have YouTube channels "Chai aur Code" and "Hitesh Choudhary".
You are an expert in Python, JavaScript, React, Node.js, and other web technologies.
Respond in a calm and friendly manner, as if you are talking to a friend.
The tone should be casual and approachable.
You use common phrases like "Hey there!", "Sure thing!", "Absolutely!", and "No problem!" in English,
and in Hindi "Bilkul", "Haanjii", "Chai peete rahe", "Kaise hai aap sab!", "Kya haal chal?".
...
<!-- Full system prompt as provided in the user input -->
"""

# Seed the conversation with the persona prompt
chat_history = [{"role": "user", "parts": [SYSTEM_PROMPT]}]

@app.post("/chat")
async def chat_with_model(req: ChatRequest):
    user_input = req.message.strip()
    chat_history.append({"role": "user", "parts": [user_input]})
    try:
        response = model.generate_content(chat_history)
        chat_history.append({"role": "model", "parts": [response.text]})
        return {"response": response.text}
    except Exception as e:
        return {"error": str(e)}
```
What’s Happening in the Code?
Loading environment variables: python-dotenv securely loads the Gemini API key from a .env file.
Setting up Gemini: the google.generativeai library connects to the Gemini 1.5 Flash model.
FastAPI setup: the app exposes a /chat endpoint that takes the user's message and sends it to Gemini.
CORS middleware: allows the frontend (hosted on Vercel) to communicate with the backend (hosted on Render).
Chat history: the conversation history is stored so the AI keeps context across turns.
I also added rate limiting to prevent the API from being overloaded, capping the number of requests each client can make per minute.
Step 3: Hosting the Backend on Render
I hosted the FastAPI backend on Render, a platform that makes deploying Python apps super easy. Here’s how I did it:
Pushed the code to a GitHub repository.
Connected the repository to Render and set it up as a web service.
Added the Gemini API key to Render’s environment variables.
Deployed the app, and Render gave me a URL for the backend API.
Render’s free tier was enough for this project, and it handles scaling automatically.
Step 4: Creating the Frontend with Loveable
For the frontend, I used Loveable, a tool that helps create beautiful and responsive UI components. I designed a clean, user-friendly chat interface where users can type messages and see the chatbot’s responses. The UI has:
A text input for user messages.
A chat window to display the conversation.
A “Send” button styled to match Hitesh’s vibrant and friendly vibe.
I manually integrated the FastAPI backend with the frontend using JavaScript's fetch API, sending user messages to the /chat endpoint and displaying the responses.
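Under the hood, that fetch call is just a JSON POST to the backend. Here is the equivalent request sketched in Python with the standard library (the actual frontend uses JavaScript's fetch; the URL below is a placeholder, not the real Render address):

```python
import json
from urllib import request

# Placeholder for the Render backend URL, not the real deployment address.
API_URL = "https://your-render-service.onrender.com/chat"

def build_chat_request(message: str) -> request.Request:
    """Build the JSON POST that the frontend's fetch call sends to /chat."""
    body = json.dumps({"message": message}).encode("utf-8")
    return request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it is a network call, so it is left commented out here:
# with request.urlopen(build_chat_request("Haanjii! React kaise seekhun?")) as resp:
#     print(json.loads(resp.read())["response"])
```

The backend's ChatRequest model expects exactly this `{"message": ...}` shape and answers with `{"response": ...}` on success.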
Step 5: Hosting the Frontend on Vercel
I hosted the frontend on Vercel, which is perfect for static sites and frontends. Here’s what I did:
Pushed the frontend code to a GitHub repository.
Connected the repository to Vercel and deployed it.
Set up the custom domain: https://persona-ai-lake.vercel.app/.
Vercel’s automatic scaling and easy deployment made this step a breeze.
Step 6: Testing
After deploying, I tested the chatbot to make sure it captured Hitesh’s tone, and I tweaked the system prompt a few times to make the responses more natural and Hitesh-like.
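One rough way to automate part of this tone check is to scan sample replies for the signature phrases. This is a simple heuristic I am sketching for illustration, not the project's actual test suite:

```python
# Rough heuristic: does a reply contain any of the persona's signature
# phrases? Illustrative only; real tone-checking was done by reading replies.
SIGNATURE_PHRASES = ["haanjii", "bilkul", "chai peete rahe", "kya haal chal"]

def sounds_like_hitesh(reply: str) -> bool:
    """Return True if the reply uses at least one signature phrase."""
    text = reply.lower()
    return any(phrase in text for phrase in SIGNATURE_PHRASES)
```

A check like this can flag prompt regressions quickly, though it obviously cannot judge the overall tone the way a human reader can.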
Challenges and Learnings
Prompt engineering: Getting the few-shot prompting right took some trial and error. I had to balance Hitesh’s tone with clear instructions for the AI.
API integration: Connecting the frontend and backend required careful handling of CORS and error responses.
Rate limiting: I added rate limiting to prevent abuse, which was a new concept for me but super useful.
Try It Out!
You can play with the chatbot here: https://persona-ai-lake.vercel.app/. Ask it about coding, life advice, or just say “Haanjii!” to see how it responds.
This project was a fun way to combine AI, web development, and a bit of Hitesh’s charm. If you’re inspired to build something similar, check out Hitesh’s channels for coding tips: Chai aur Code and Hitesh Choudhary. Keep coding, and chai peete rahe!
Written by AKASH YADAV