Chain-of-Thought


Introduction
AI models can be powerful, but not all of them are great at thinking. Some models are like parrots — repeating patterns they’ve seen. Others can actually reason, break down problems, and explain their answers.
The difference often comes down to a technique called Chain-of-Thought (CoT). In this article, we’ll explore how to use CoT to turn a non-thinking model into a thinking one — without retraining the model.
1. Non-Thinking vs. Thinking Models
Think of it like students:
Non-thinking model: Memorizes answers but can’t explain them.
Thinking model: Solves problems step-by-step and explains the process.
Example:
Non-thinking model:
Q: What’s 3 × (4 + 2)?
A: 18
Thinking model:
Q: What’s 3 × (4 + 2)?
A: Let’s think step-by-step.
Step 1: 4 + 2 = 6
Step 2: 3 × 6 = 18
Final Answer: 18
2. What is Chain-of-Thought (CoT)?
Chain-of-Thought is a prompting method where the AI is guided to produce intermediate reasoning steps before giving the final answer.
It’s like asking the model to “think out loud.”
Without CoT:
Q: If you have 10 chocolates and eat 4, how many are left?
A: 6
With CoT:
Q: If you have 10 chocolates and eat 4, how many are left?
A: Let’s think step-by-step.
Step 1: Start with 10 chocolates.
Step 2: Eat 4 chocolates.
Step 3: 10 - 4 = 6.
Final Answer: 6
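In code, the only difference between the two behaviors is the prompt. A minimal sketch of building a CoT prompt (the instruction wording and the `make_cot_prompt` helper are illustrative assumptions, not a fixed API; any phrasing that asks for step-by-step reasoning works similarly):

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a plain question in a Chain-of-Thought instruction.

    The exact wording is an assumption; the key is asking for
    intermediate steps and a clearly marked final answer.
    """
    return (
        "Answer the question below. Think step-by-step, "
        "showing each intermediate step, then give the final answer "
        "on a line starting with 'Final Answer:'.\n\n"
        f"Q: {question}\nA: Let's think step-by-step.\n"
    )

prompt = make_cot_prompt("If you have 10 chocolates and eat 4, how many are left?")
print(prompt)
```

The resulting string is what you send to whatever model you're using; the "A: Let's think step-by-step." suffix nudges the model to continue with reasoning rather than a bare answer.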
3. How to Turn a Non-Thinking Model into a Thinking Model
You don’t need a special model — you just need to prompt it correctly.
Step 1 — Give explicit reasoning instructions
Tell the model:
“Think step-by-step before answering.”
Or:
“Explain your reasoning, then give the final answer.”
Step 2 — Use Few-Shot Examples
Provide a couple of examples so the model learns your desired output style.
Example:
Q: 5 + 3 × 2 = ?
A: Step 1: 3 × 2 = 6
Step 2: 5 + 6 = 11
Final Answer: 11
Q: A car travels 120 km in 2 hours. What’s the speed in km/h?
A: Step 1: Speed = distance / time
Step 2: 120 / 2 = 60
Final Answer: 60 km/h
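Few-shot prompting is just string assembly: worked examples like the ones above are prepended to the new question so the model imitates their format. A sketch, using the two examples from this section (the `few_shot_prompt` helper and its formatting are illustrative assumptions):

```python
# Worked examples the model should imitate (taken from the text above).
FEW_SHOT = [
    ("5 + 3 × 2 = ?",
     "Step 1: 3 × 2 = 6\nStep 2: 5 + 6 = 11\nFinal Answer: 11"),
    ("A car travels 120 km in 2 hours. What's the speed in km/h?",
     "Step 1: Speed = distance / time\nStep 2: 120 / 2 = 60\nFinal Answer: 60 km/h"),
]

def few_shot_prompt(question: str) -> str:
    """Build a prompt showing worked Q/A examples before the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in FEW_SHOT]
    parts.append(f"Q: {question}\nA:")  # leave the answer for the model
    return "\n\n".join(parts)

print(few_shot_prompt("7 + 2 × 4 = ?"))
```

Two or three examples are usually enough to lock in the step-by-step format; more examples cost tokens with diminishing returns.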
Step 3 — Decide on visible or hidden CoT
Visible CoT: Show the reasoning to the user (good for education or debugging).
Hidden CoT: Model reasons internally, but you only display the final answer.
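Hidden CoT is typically implemented by letting the model produce its full reasoning, then displaying only the final line to the user. A sketch, assuming the model was instructed to end with a "Final Answer:" marker (the `extract_final_answer` helper is illustrative):

```python
import re

def extract_final_answer(completion: str) -> str:
    """Return only the text after the last 'Final Answer:' marker.

    Falls back to the full completion if the marker is missing,
    so the user never sees an empty response.
    """
    matches = re.findall(r"Final Answer:\s*(.+)", completion)
    return matches[-1].strip() if matches else completion.strip()

completion = (
    "Step 1: Start with 10 chocolates.\n"
    "Step 2: Eat 4 chocolates.\n"
    "Step 3: 10 - 4 = 6.\n"
    "Final Answer: 6"
)
print(extract_final_answer(completion))  # prints "6"
```

The full completion can still be logged for debugging, so you keep the benefits of visible CoT internally while showing users a clean answer.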
4. Why CoT Works
CoT improves performance because it forces the model to break down the problem into smaller, logical steps.
Benefits include:
Fewer mistakes on multi-step problems
Easier debugging
More human-like reasoning
5. Best Practices
Always give clear reasoning instructions.
Avoid unnecessarily verbose reasoning; longer chains cost more tokens and add latency.
For production, consider hidden CoT to keep answers clean.
Always verify answers — models can “think wrong” confidently.
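For the arithmetic examples in this article, that verification can even be automated: recompute the answer independently and compare it with the model's final answer. A minimal sketch (the `check_answer` helper is illustrative and only covers trusted arithmetic expressions; real tasks need task-specific checks):

```python
def check_answer(expression: str, model_answer: str) -> bool:
    """Recompute a simple arithmetic expression and compare it with
    the model's stated final answer.

    Note: eval is only acceptable here because the expression comes
    from our own test set, never from untrusted input.
    """
    expected = eval(expression, {"__builtins__": {}}, {})
    return float(model_answer) == float(expected)

# The chocolate example: the model said "6" for 10 - 4.
print(check_answer("10 - 4", "6"))  # prints True
```

Spot checks like this catch the confidently wrong cases: a model can produce perfectly formatted steps whose arithmetic is still incorrect.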
Conclusion
By adding Chain-of-Thought prompting, you can turn almost any model into a “thinking” model.
It’s a low-effort, high-impact way to make AI more accurate and explainable — without retraining.
Written by Shubham Mourya