How Chain-of-Thought Turns a Non-Thinking AI into a Thinking Model

Shivani Pandey

Introduction

When you ask most AI models a question, they give a direct answer—sometimes correct, sometimes not. But what if we want the AI to reason like a human before answering?

This is where Chain-of-Thought (CoT) prompting comes in. It’s a way to make even a non-thinking model act like a thinking model by guiding it to show its reasoning process.


What is a Non-Thinking Model?

A non-thinking model is like a student who blurts out the final answer without showing their work.

  • Example: You ask, “What is 37 × 45?”

  • AI might reply directly: “1665”

✅ Quick, but ❌ no reasoning.


What is Chain-of-Thought?

Chain-of-Thought prompting tells the model:
👉 “Don’t just give the answer. Think step by step. Show your reasoning.”

So instead of blurting out:

  • It breaks the problem into smaller parts

  • Checks correctness at each step

  • Then delivers the final answer
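The simplest way to trigger this behavior (often called zero-shot CoT) is to append a step-by-step instruction to the prompt. Here is a minimal sketch; the `buildMessages` helper is illustrative, not part of any SDK:

```javascript
// Build a chat message array, optionally appending a zero-shot CoT instruction.
// (buildMessages is a hypothetical helper, not part of the OpenAI SDK.)
function buildMessages(question, useCoT) {
  const content = useCoT
    ? `${question}\n\nLet's think step by step.`
    : question;
  return [{ role: "user", content }];
}

const plain = buildMessages("What is 37 × 45?", false);
const cot = buildMessages("What is 37 × 45?", true);

console.log(plain[0].content); // just the question
console.log(cot[0].content);   // question + step-by-step instruction
```

The only difference between the two requests is that one sentence, yet it changes whether the model answers directly or shows its reasoning.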


Example: With and Without CoT

Without CoT:
User: “Solve 23 + 47 × 2”
AI: “140” (wrong — it added first, left to right, instead of following the order of operations)

With CoT:
AI:

  • Step 1: Follow BODMAS → multiply first.

  • Step 2: 47 × 2 = 94

  • Step 3: 23 + 94 = 117

  • Final Answer: 117

Here, the process is transparent and trustworthy.
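The arithmetic in these steps is easy to verify in code — JavaScript applies the same operator precedence the CoT steps describe:

```javascript
// Operator precedence: multiplication binds tighter than addition,
// matching the BODMAS steps above.
const correct = 23 + 47 * 2;        // multiply first: 23 + 94
const leftToRight = (23 + 47) * 2;  // the mistake a non-reasoning answer can make

console.log(correct);     // prints: 117
console.log(leftToRight); // prints: 140
```

The gap between 117 and 140 is exactly the kind of error that an explicit "multiply first" step catches.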


How CoT Builds a Thinking Model

  1. START Phase: The model identifies the problem.

  2. THINK Phase: It explores possible solutions step by step.

  3. EVALUATE Phase: It double-checks correctness.

  4. OUTPUT Phase: It presents the final result.

This process transforms the model from a “black-box answer machine” into a “transparent problem solver.”
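One way to make those four phases machine-readable is to ask the model to label each line and then parse the labels out of the response. The `parsePhases` helper below is a hypothetical sketch, assuming the model prefixes lines with `START:`, `THINK:`, `EVALUATE:`, and `OUTPUT:`:

```javascript
// Hypothetical helper: split a labeled CoT response into its phases.
// Assumes the model prefixes each line with one of the four phase names.
function parsePhases(text) {
  const phases = { START: [], THINK: [], EVALUATE: [], OUTPUT: [] };
  for (const line of text.split("\n")) {
    const match = line.match(/^(START|THINK|EVALUATE|OUTPUT):\s*(.*)$/);
    if (match) phases[match[1]].push(match[2]);
  }
  return phases;
}

const sample = [
  "START: Solve 23 + 47 × 2",
  "THINK: Multiply first: 47 × 2 = 94",
  "EVALUATE: 23 + 94 = 117, precedence applied correctly",
  "OUTPUT: 117",
].join("\n");

console.log(parsePhases(sample).OUTPUT[0]); // prints: 117
```

Parsing the phases this way lets you log the reasoning for debugging while showing only the `OUTPUT` line to end users.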


Code Example: Chain-of-Thought with Gemini

import "dotenv/config";
import { OpenAI } from "openai";

// Gemini exposes an OpenAI-compatible endpoint, so the OpenAI SDK works
// with a Gemini API key and a custom baseURL.
const client = new OpenAI({
  apiKey: process.env.GEMINI_API_KEY,
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

// The system prompt instructs the model to reason through explicit phases.
const SYSTEM_PROMPT = `
You are an assistant who always reasons step by step using:
START → THINK → EVALUATE → OUTPUT
`;

async function main() {
  const response = await client.chat.completions.create({
    model: "gemini-2.5-flash",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: "Solve 12 * 8 + 50" },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();

This ensures the AI doesn’t just give the final number—it shows how it got there.


Why It Matters

  • Trust: You can see how the answer was derived.

  • Debugging: If the AI goes wrong, you can see at which step.

  • Better Accuracy: Step-by-step reasoning reduces careless mistakes.


Final Thoughts

Chain-of-Thought prompting is like teaching a student to show their work instead of guessing. By encouraging AI to break down problems step by step, we convert a non-thinking model into a thinking partner.
