How to Turn a Non-Thinking Model into a Thinking Model with Chain-of-Thought

Punyansh Singla

AI models are powerful, but sometimes they give answers that feel too direct—like they’re jumping straight to the conclusion without explaining how they got there. That’s because many models don’t naturally “think” step by step. They just map your question to an answer in one go.

But here’s the exciting part: with a simple technique called Chain-of-Thought (CoT), you can teach a non-thinking model to act like a thinking one. Let’s break this down in simple terms.


What is Chain-of-Thought?

Think of Chain-of-Thought as showing your work in math class.

Instead of only writing the final answer, you explain each step that got you there:

  • Without CoT:
    👉 Question: What’s 12 × 13?
    👉 Model: 156

  • With CoT:
    👉 Question: What’s 12 × 13?
    👉 Model: “12 × 10 = 120, 12 × 3 = 36, 120 + 36 = 156. So the answer is 156.”

See the difference? The second one feels more human—because it explains its thinking process.


Why Do We Need It?

When a model just gives the answer, you don’t know why it chose that answer. But if it explains step by step:

  • Mistakes become easier to spot

  • Answers are more reliable

  • You gain trust in the model’s reasoning

It’s like the difference between a student blurting out “42” vs. actually showing how they solved the problem.


How to Add Chain-of-Thought

The magic is surprisingly simple: you just tell the model to think step by step.

In prompt engineering, this looks like:

Q: What is 25 × 25? Think step by step.

Now the model doesn't just reply "625". It walks through its work: "25 × 20 = 500, 25 × 5 = 125, 500 + 125 = 625. So the answer is 625."
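If you're calling a model from code, the same trick is just a string in your prompt. Here's a minimal sketch using the openai Python client; the model name is a placeholder, so swap in whatever chat-capable model you actually use:

```python
# Minimal sketch: zero-shot Chain-of-Thought via the openai client.
# Assumes OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

question = "What is 25 x 25?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The only "magic" is appending the step-by-step instruction.
        {"role": "user", "content": f"Q: {question} Think step by step."},
    ],
)

print(response.choices[0].message.content)
```

That one extra sentence in the prompt is the whole technique; everything else is an ordinary API call.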


Turning a Non-Thinking Model into a Thinking Model

If you’re using a smaller or cheaper model (which usually doesn’t “reason” well), Chain-of-Thought can level it up. Here’s how:

  1. Add instructions → Tell it: “Let’s think step by step.”

  2. Encourage reasoning → Instead of just asking for the answer, ask it to “explain how you got this.”

  3. Use examples → Show the model a few worked-out examples (like math, logic, or even coding problems). This is called few-shot prompting (there's a sketch of it just after this list).

Over time, the model starts following that same reasoning style.
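Here's what step 3 might look like in practice. This is a minimal sketch of a few-shot prompt builder; build_prompt is a hypothetical helper, and the worked examples are just illustrations you'd replace with ones from your own domain:

```python
# Minimal sketch: few-shot Chain-of-Thought.
# We prepend worked examples so the model imitates the step-by-step style.

EXAMPLES = """\
Q: What is 12 x 13?
A: 12 x 10 = 120, 12 x 3 = 36, 120 + 36 = 156. So the answer is 156.

Q: A train travels 60 km in 1.5 hours. What is its speed?
A: Speed = distance / time = 60 / 1.5 = 40. So the answer is 40 km/h.
"""

def build_prompt(question: str) -> str:
    """Prepend the worked examples, then cue the same reasoning style."""
    return f"{EXAMPLES}\nQ: {question}\nA: Let's think step by step."

print(build_prompt("What is 18 x 24?"))
```

The prompt this prints is what you'd send to the model: the examples set the pattern, and the trailing "Let's think step by step" nudges it to continue in that style.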


Real-Life Example

Let’s say you’re building a chatbot to help students with homework.

  • Without CoT:
    Student: “What’s the capital of France?”
    Bot: “Paris”

  • With CoT:
    Student: “What’s the capital of France?”
    Bot: “France is a country in Europe. Its most famous and largest city is Paris, which is also the capital. So the answer is Paris.”

The second one sounds smarter, more helpful, and more trustworthy.
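One way to get that behavior consistently is a system message that asks for reasoning up front, instead of appending "think step by step" to every user question. Below is a minimal sketch, again using the openai client with a placeholder model name; the exact wording of the system prompt is an assumption you'd tune for your app:

```python
# Minimal sketch: baking Chain-of-Thought into a homework chatbot
# with a system message, so every answer explains itself.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a homework helper. Before giving any final answer, "
    "briefly explain your reasoning step by step, then state the answer."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What's the capital of France?"))
```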


Final Thoughts

Chain-of-Thought is one of the simplest ways to make AI models feel more like “thinking” models.

  • It doesn't require retraining

  • It works on most models

  • It makes answers clearer and easier to trust

If you’re building apps with AI—whether it’s for learning, productivity, or fun—try adding Chain-of-Thought to your prompts. You’ll be surprised how much more human your AI starts to feel.
