Teaching AI to “Think”: Building a Thinking Model with Chain-of-Thought

When you ask an AI, “What’s 23 × 47?”, it might instantly give you an answer.
But here’s the catch — sometimes it’s just guessing.
Large Language Models (LLMs) are great at pattern matching, but they don’t “think” in the human sense. Out of the box, most are non-thinking models — they give you the final output without showing how they got there.
But what if you could make them reason step-by-step, just like a human solving a math problem or explaining a decision?
That’s exactly what Chain-of-Thought (CoT) prompting does.
Let’s break it down.
What’s a Non-Thinking Model?
A non-thinking model is like a student who memorizes answers from the back of the book.
You ask a question, and they give a correct-looking answer — but you have no idea whether they actually understood the problem.
Pros:
- Fast responses.
- Concise answers.

Cons:
- No transparency — you don’t know if the answer is correct until you check it yourself.
- Struggles with complex, multi-step reasoning.
What’s Chain-of-Thought Prompting?
Chain-of-Thought is a prompting technique where you tell the model:
“Explain your reasoning step-by-step before giving the final answer.”
Instead of just spitting out the solution, the AI walks you through the process.
Example:
Without CoT (non-thinking):
Q: If a train travels 60 km/h for 2 hours, how far does it go?
A: 120 km.
With CoT (thinking):
Q: If a train travels 60 km/h for 2 hours, how far does it go?
A: Speed × Time = Distance.
60 × 2 = 120 km.
So, the train travels 120 km.
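The only difference between those two prompts is an instruction appended to the question. A minimal sketch of that idea (`build_prompt` is a hypothetical helper, not part of any library):

```python
def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Return the prompt text, optionally asking the model to reason step-by-step."""
    if chain_of_thought:
        return (
            f"{question}\n"
            "Explain your reasoning step-by-step before giving the final answer."
        )
    return question

# Non-thinking prompt: just the question.
plain = build_prompt("If a train travels 60 km/h for 2 hours, how far does it go?")

# Thinking prompt: the same question plus the CoT instruction.
thinking = build_prompt(
    "If a train travels 60 km/h for 2 hours, how far does it go?",
    chain_of_thought=True,
)
```

Whichever prompt you send to your model of choice, everything else about the request stays the same — CoT lives entirely in the prompt text.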
Why CoT Works
When you force the AI to “think out loud,” three things happen:
- Fewer Mistakes – Step-by-step reasoning reduces careless errors.
- Better Transparency – You can see how the answer was formed.
- Complex Problem Solving – Works better for multi-step logic, math, or reasoning tasks.
Turning a Non-Thinking Model into a Thinking One
Here’s how you can upgrade any LLM into a thinking model using CoT.
1. Explicitly Ask for Steps
Simply adding “Show your reasoning step-by-step” to your prompt works wonders.
Example:
“Solve this problem and explain your reasoning step-by-step: A shop sells apples at ₹20 each. If you buy 5 apples and get a 10% discount, how much do you pay?”
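For reference, the reasoning you would expect the model to produce for that prompt works out like this:

```python
# Worked example of the apples problem from the prompt above.
price_per_apple = 20   # ₹ per apple
quantity = 5
discount_rate = 0.10   # 10% discount

subtotal = price_per_apple * quantity      # 20 × 5 = 100
total = subtotal * (1 - discount_rate)     # 100 × 0.90 = 90
print(f"You pay ₹{total:.0f}")             # You pay ₹90
```

A good CoT response should surface each of these intermediate values (the ₹100 subtotal, then the 10% reduction) rather than jumping straight to ₹90.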
2. Use Few-Shot CoT Examples
Show the AI a couple of worked examples first.
Example:
Example 1:
Q: If I have 3 pens and buy 2 more, how many do I have?
A: Start with 3 pens. Add 2 more → 3 + 2 = 5 pens. Final answer: 5.

Example 2:
Q: A bus travels 50 km/h for 4 hours. How far does it go?
A: Distance = Speed × Time → 50 × 4 = 200 km. Final answer: 200.

Now solve: A shop sells apples at ₹20 each...
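Assembling a few-shot CoT prompt is just string concatenation. A sketch, using the two worked examples above:

```python
# The worked examples from this section, as (question, reasoned answer) pairs.
FEW_SHOT_EXAMPLES = [
    ("If I have 3 pens and buy 2 more, how many do I have?",
     "Start with 3 pens. Add 2 more → 3 + 2 = 5 pens. Final answer: 5."),
    ("A bus travels 50 km/h for 4 hours. How far does it go?",
     "Distance = Speed × Time → 50 × 4 = 200 km. Final answer: 200."),
]

def few_shot_prompt(question: str) -> str:
    """Prepend the worked examples, then leave the new question's answer open."""
    parts = [f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "A shop sells apples at ₹20 each. If you buy 5 apples "
    "and get a 10% discount, how much do you pay?"
)
```

The trailing bare `A:` invites the model to continue in the same reasoned, "Final answer:" style as the examples.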
3. Combine CoT with Role-Based Prompting
Give the model a role like “math teacher” or “data analyst” to make explanations more structured.
Example:
“You are a math tutor. Solve this step-by-step and explain like you’re teaching a student…”
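In chat-style APIs, the role usually goes into a system message and the problem into a user message. A minimal sketch of that shape (the message format shown is the common `role`/`content` convention; adapt it to whatever client you use):

```python
def role_based_messages(role: str, question: str) -> list[dict]:
    """Build a role-primed CoT conversation in the common chat-message shape."""
    return [
        {
            "role": "system",
            "content": (
                f"You are a {role}. Solve problems step-by-step and "
                "explain like you're teaching a student."
            ),
        },
        {"role": "user", "content": question},
    ]

messages = role_based_messages(
    "math tutor",
    "A shop sells apples at ₹20 each. If you buy 5 apples "
    "and get a 10% discount, how much do you pay?",
)
```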
4. Use CoT Iteratively (Multi-Turn)
First prompt: “Show your step-by-step reasoning.”
Second prompt: “Check your reasoning for mistakes and correct them.”
This double-pass approach increases accuracy.
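The double-pass loop can be sketched as a small wrapper around any single-turn LLM call. Here `ask_llm` is a hypothetical placeholder for your model call; the stub at the bottom exists only to demonstrate the calling pattern:

```python
def two_pass(ask_llm, question: str) -> str:
    """Pass 1: reason step-by-step. Pass 2: ask the model to check itself."""
    draft = ask_llm(f"{question}\nShow your step-by-step reasoning.")
    revised = ask_llm(
        f"Here is your previous answer:\n{draft}\n"
        "Check your reasoning for mistakes and correct them, "
        "then restate the final answer."
    )
    return revised

# Stub standing in for a real LLM call, just to show the flow:
def stub_llm(prompt: str) -> str:
    if "Check your reasoning" in prompt:
        return "revised answer"
    return "draft answer"

result = two_pass(stub_llm, "What is 23 × 47?")
```

Because the second prompt sees the first pass's full reasoning, the model can catch arithmetic slips or skipped steps before committing to a final answer.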
When NOT to Use CoT
While CoT is powerful, it’s not always the best choice:
- For very simple tasks (e.g., “Translate ‘Hello’ to French”) — it just adds unnecessary length.
- When you need concise answers — CoT can make replies much longer.
- In real-time systems where speed is critical — extra steps mean extra processing time.
Real-World Uses of CoT
- Coding: Debugging code with a step-by-step thought process.
- Math & Physics: Solving word problems with clear derivations.
- Business Analysis: Explaining the logic behind financial projections.
- Data Science: Walking through how a prediction or classification was made.
Final Thoughts
Large Language Models don’t “think” by default. But with Chain-of-Thought prompting, you can make them reason more like a human — breaking problems into smaller steps, explaining each, and producing more reliable answers.
In short:
Without CoT → Answers.
With CoT → Answers + the reasoning behind them.
And when you can see the reasoning, you’re not just getting smarter AI outputs — you’re also becoming a better judge of their accuracy.
Written by

Sanskar Agarwal
I’m Sanskar Agarwal — a 3rd-year B.Tech student in Computer Science at VESIT, Mumbai, passionate about building impactful tech solutions. I enjoy turning ideas into reality through full-stack development, IoT projects, and machine learning applications. 💻 Currently learning and experimenting with the MERN stack and the Generative AI field. Lifelong learner, tech enthusiast, and a firm believer in “Build. Break. Learn. Repeat.” 📫 Let’s connect, collaborate, and share knowledge — tech grows best when it’s open!