How I Made My RAG Bot Understand What You Actually Mean

Ever built something that technically works but practically flops when it really matters? Yeah, that was me — trying to make a smart RAG chatbot… and realizing it didn’t always “get” what users were really asking.
🤔 The Problem
I was working on a RAG-based assistant — standard stuff:
🔹 User asks a question
🔹 We embed it
🔹 Retrieve the most relevant chunks
🔹 Feed them to the LLM for a response
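In code, that whole loop is only a handful of lines. Here's a minimal sketch, assuming an OpenAI-style client and a tiny in-memory "vector store" of pre-embedded chunks; the model names and helper functions are placeholders, not my exact stack:

```python
# Minimal RAG loop: embed the question, retrieve the closest chunks, ask the LLM.
# Assumes pre-embedded chunks held in memory; model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def retrieve(question: str, chunks: list[str], chunk_vecs: np.ndarray, k: int = 3) -> list[str]:
    q = embed([question])[0]
    # cosine similarity of the question against every stored chunk
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def answer(question: str, chunks: list[str], chunk_vecs: np.ndarray) -> str:
    context = "\n\n".join(retrieve(question, chunks, chunk_vecs))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```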
It was decent, until I tested it with slightly vague or ambiguous questions.
Like imagine someone asks:
“What is FS?”
And boom — it gives a long explanation of the fs module in Node.js.
But what if the user actually meant:
“What is the fs module used for?”
“How do I handle errors with fs?”
“What are the dependencies or alternatives?”
“What even is a module?”
The bot had no idea. It just took the prompt literally.
🔍 What I Tried (and Loved)
So I did what any curious dev would do: went down research rabbit holes for a weekend, found some really cool concepts, and implemented them.
🧪 1. Prompt Expansion: Think Like the User
Instead of sticking to what the user typed, I started expanding the question behind the scenes.
If the input is:
“What is fs?”
I auto-generate 4–5 related interpretations like:
“What is the fs module in Node.js?”
“What are common issues with fs?”
“What’s the use-case of fs?”
“Alternatives to fs module?”
“What is a module in programming?”
Then I embed each of these, retrieve docs for all of them, and...
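Before getting to what happens with all those results, here's roughly what the expansion step looks like. It's a sketch building on the earlier snippet (it reuses the `client` and `retrieve` helpers); the prompt wording and parsing are illustrative, not my exact setup:

```python
# Prompt expansion: ask the LLM for a handful of plausible interpretations of the
# raw question, then run retrieval once per interpretation.
def expand_question(question: str, n: int = 5) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"A user asked: \"{question}\"\n"
                f"Rewrite it as {n} different, more specific questions they might "
                "have actually meant. Return one question per line, nothing else."
            ),
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

def retrieve_for_all(question: str, chunks, chunk_vecs) -> dict[str, list[str]]:
    # one retrieval pass per interpretation (plus the original), keyed by the question
    return {q: retrieve(q, chunks, chunk_vecs)
            for q in [question, *expand_question(question)]}
```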
⚖️ 2. LLM-as-a-Judge: Let AI Pick the Smartest Interpretation
Now comes the fun part:
I let the LLM itself decide which one of these interpretations best matches the user’s intent.
Basically, the LLM becomes a judge, not just a responder.
It looks at all the retrieved chunks and picks the most relevant one to answer from. And boom — way better accuracy, even for fuzzy questions.
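Here's roughly how that judging step fits on top of the expansion sketch above. Again, this is illustrative (the judging prompt, the number-parsing, and the fallback are assumptions, not my production code):

```python
# LLM-as-a-judge: show the model every interpretation with its retrieved chunks,
# let it pick the one that best matches the user's intent, then answer from that
# context only.
def judge_and_answer(question: str, chunks, chunk_vecs) -> str:
    candidates = retrieve_for_all(question, chunks, chunk_vecs)
    numbered = "\n\n".join(
        f"[{i}] Interpretation: {q}\nChunks:\n" + "\n".join(ctx)
        for i, (q, ctx) in enumerate(candidates.items())
    )
    pick = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"The user asked: \"{question}\"\n\n{numbered}\n\n"
                "Which numbered interpretation best matches the user's intent? "
                "Reply with just the number."
            ),
        }],
    )
    try:
        best = list(candidates.values())[int(pick.choices[0].message.content.strip())]
    except (ValueError, IndexError):
        best = list(candidates.values())[0]  # fall back to the literal question
    context = "\n\n".join(best)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```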
📚 What I Learned
This whole experience taught me something big:
Sometimes the fix isn’t better data or a new model…
It’s just thinking more like the user.
I’m still learning, but this small change made a huge difference in how useful the chatbot felt.
If you’re working on RAG or any question-answer system — try this. It’s simple, but powerful.
Happy to chat if anyone’s trying something similar! 💬