Unlock AI Reasoning with a Chain-of-Thought Thinking Model

Aman Vijay · 6 min read

For decades, AI systems excelled at pattern recognition but struggled with step-by-step reasoning. They were sophisticated pattern matchers—non-thinking models—that generated answers without transparency or logical structure. Enter Chain-of-Thought (CoT) prompting, a technique that transforms these static systems into dynamic reasoning engines. In this article, we’ll deconstruct how CoT bridges the gap between non-thinking models and true reasoning systems.


Why Non-Thinking Models Fall Short

Traditional language models operate like black boxes:
Input → Model → Output
They predict answers based on statistical patterns in training data, not logical reasoning. For example:

Prompt:
"If a store has 12 apples and sells 3, then receives 5 more, how many apples are left?"

Non-Thinking Model Output:
14 (Correct answer but no insight into how it was derived)

Or worse:
10 (Incorrect with no explanation)

This opacity makes debugging impossible and erodes trust.
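
To make this concrete, here is a minimal sketch of such a plain, non-CoT call. It assumes the OpenAI Node.js SDK (the same openai package used in the examples later in this article) and an OPENAI_API_KEY in the environment; the model name is illustrative:

import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function plainAnswer() {
  const response = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      {
        role: 'user',
        content:
          'If a store has 12 apples and sells 3, then receives 5 more, how many apples are left?',
      },
    ],
  });

  // Typically prints just a bare number such as "14" — nothing to inspect or debug.
  console.log(response.choices[0].message.content);
}

plainAnswer();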


Chain-of-Thought: The Blueprint for Reasoning

CoT restructures the AI’s response by forcing it to verbalize intermediate reasoning steps, mimicking human cognition:
Input → Explicit Reasoning Steps → Output

Here’s the same problem solved with CoT:

Prompt:
"If a store has 12 apples and sells 3, then receives 5 more, how many apples are left? Let's think step by step."

CoT Output:

  1. Start with 12 apples.

  2. Sell 3: 12 - 3 = 9 apples.

  3. Receive 5: 9 + 5 = 14 apples.

  4. Final answer: 14

How to Build a Thinking Model: Two Practical Approaches

1. Zero-Shot CoT

Simply append phrases like "Let's think step by step" to your prompt. This triggers the model’s internal reasoning capability without examples.

Example Prompt:
"A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? Think step by step."

import OpenAI from 'openai';

// gemini-2.0-flash can be called through Google's OpenAI-compatible endpoint.
const client = new OpenAI({
  apiKey: process.env.GEMINI_API_KEY,
  baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gemini-2.0-flash',
    messages: [
      {
        role: 'system',
        content:
          'You are a helpful assistant. Reason through problems step by step before giving the final answer.',
      },
      {
        // Zero-shot CoT: no examples, just the trigger phrase appended to the question.
        role: 'user',
        content:
          'A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? Think step by step.',
      },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();

Output:

  1. Bat + Ball = $1.10.

  2. Bat = Ball + $1.00.

  3. Substitute: (Ball + $1.00) + Ball = $1.10.

  4. 2 × Ball = $0.10 → Ball = $0.05.

2. Few-Shot CoT

Provide annotated examples demonstrating reasoning steps.

Prompt Template:


Example 1:  
Q: Alice has 5 books. She buys 3 more. How many books does she have?  
A: Start: 5 books. Add 3: 5 + 3 = 8. Answer: 8.  

Example 2:  
Q: A pizza has 8 slices. John eats 2, then Mary eats 1. How many are left?  
A: Start: 8 slices. John eats 2: 8 - 2 = 6. Mary eats 1: 6 - 1 = 5. Answer: 5.  

Target Question:  
Q: [Your Question]  
A: [Model generates CoT steps]


import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function main() {
  // Few-shot prompting: the system prompt embeds example Q/A pairs
  // that demonstrate the behaviour we want the model to imitate.
  const response = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      {
        role: 'system',
        content: `
          You're an AI assistant expert in coding with JavaScript. JavaScript is the only coding language you know.
          If the user asks anything other than a JavaScript coding question, do not answer it.

          Examples:
          Q: Hey there
          A: Hey, nice to meet you. How can I help you today? Do you want me to show you what we are up to?

          Q: Hey, I want to learn something
          A: Sure, why don't we go through JavaScript?

          Q: I am bored
          A: What about a JS quiz?

          Q: Can you write code in Python?
          A: I can, but I am designed to help with JS.
        `,
      },
      { role: 'user', content: 'Hey GPT' },
      { role: 'user', content: 'Hey, do you have a JS example?' },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();


CoT (Chain-of-Thought) in Action

The block above shows plain few-shot prompting; the example below adds explicit reasoning steps to each demonstration, which is what makes it few-shot CoT:

import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

// Few-shot CoT: each exemplar shows the reasoning (sum the odd numbers)
// before the final True/False answer, so the model imitates that pattern.
async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      {
        role: 'system',
        content: `
          You are a logical reasoning assistant.
          Always break down the reasoning process step-by-step before giving the final True/False answer.
        `,
      },
      {
        role: 'user',
        content: `
          The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
          A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.

          The odd numbers in this group add up to an even number: 17, 10, 19, 4, 8, 12, 24.
          A: Adding all the odd numbers (17, 19) gives 36. The answer is True.

          The odd numbers in this group add up to an even number: 16, 11, 14, 4, 8, 13, 24.
          A: Adding all the odd numbers (11, 13) gives 24. The answer is True.

          The odd numbers in this group add up to an even number: 17, 9, 10, 12, 13, 4, 2.
          A: Adding all the odd numbers (17, 9, 13) gives 39. The answer is False.

          The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
          A:
        `,
      },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();

Why Does CoT Work? The Science Simplified

  1. Step Decomposition: Breaking problems into subtasks reduces cognitive load.

  2. Error Localization: Mistakes surface in intermediate steps, where they are easy to spot (e.g., an erroneous step like 9 + 5 = 15).

  3. Knowledge Retrieval: Verbalizing steps forces the model to access relevant facts/rules.

Research shows CoT can improve accuracy on math, logic, and planning benchmarks such as GSM8K by tens of percentage points for large models (Wei et al., 2022).

(Figure: the CoT prompting pipeline. Image source: Zhang et al., 2022.)


Challenges & Best Practices

  • Hallucinations: Models may invent incorrect steps. Mitigate with:

    • Self-consistency checks (generate multiple chains and pick the most frequent answer; see the sketch after this list).

    • Hybrid approaches (e.g., Program-Aided Language Models that generate executable code).

  • Overhead: CoT increases response length. Use only for complex tasks.

  • Prompt Engineering: Start with zero-shot, then add few-shot examples if needed.
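
Here is a minimal self-consistency sketch, assuming the same OpenAI-style client setup as the earlier examples and that each chain ends with a line of the form Answer: <value>; the helper name selfConsistentAnswer is purely illustrative:

import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function selfConsistentAnswer(question, samples = 5) {
  // Sample several independent reasoning chains for the same question.
  const response = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    n: samples,
    temperature: 0.8, // non-zero temperature so the chains actually differ
    messages: [
      {
        role: 'system',
        content: 'Reason step by step, then finish with a line "Answer: <value>".',
      },
      { role: 'user', content: question },
    ],
  });

  // Extract the final answer from each chain and count how often it occurs.
  const counts = {};
  for (const choice of response.choices) {
    const match = choice.message.content.match(/Answer:\s*(.+)/i);
    if (!match) continue;
    const answer = match[1].trim();
    counts[answer] = (counts[answer] || 0) + 1;
  }

  // Majority vote: return the most frequent final answer.
  const ranked = Object.entries(counts).sort((a, b) => b[1] - a[1]);
  return ranked.length ? ranked[0][0] : null;
}

selfConsistentAnswer(
  'A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?'
).then(console.log);

Sampling with a non-zero temperature is what produces genuinely different chains; with temperature 0 the completions tend to collapse onto the same reasoning path, and the vote adds nothing.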


Real-World Applications

  1. Math Tutoring: Students see how to solve problems.

  2. Medical Diagnosis: Trace logic from symptoms to conclusions.

  3. Legal Analysis: Step-by-step application of statutes.


The Future: Beyond CoT

CoT is the foundation for advanced reasoning techniques like:

  • Tree of Thoughts: Explore multiple reasoning paths.

  • Self-Refine: Models critique and revise their own CoT steps (sketched below).
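
As a taste of where this is heading, here is a minimal Self-Refine-style loop, again assuming the same client setup as above; the two-call structure (draft, then critique and revise) is the idea, not a definitive implementation:

import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function selfRefine(question) {
  // Step 1: produce an initial chain-of-thought draft.
  const draft = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      { role: 'system', content: 'Reason step by step, then give a final answer.' },
      { role: 'user', content: question },
    ],
  });
  const firstAttempt = draft.choices[0].message.content;

  // Step 2: ask the model to critique its own reasoning and revise it.
  const revision = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      { role: 'system', content: 'You review step-by-step solutions for mistakes.' },
      {
        role: 'user',
        content: `Question: ${question}\n\nProposed solution:\n${firstAttempt}\n\nCheck each step, point out any errors, and give a corrected final answer.`,
      },
    ],
  });

  return revision.choices[0].message.content;
}

selfRefine(
  'The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. True or False?'
).then(console.log);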


Conclusion

Chain-of-Thought is more than a prompting trick: it’s a paradigm shift toward transparent, trustworthy AI. By converting non-thinking models into reasoning systems, we unlock capabilities once thought exclusive to humans. Start experimenting with "Let's think step by step" today, and watch your models transform from calculators into thinkers.
