From Step-Back to Chain-of-Thought: Navigating the Abstraction Spectrum

Table of contents
- Quick Recap: What Step-Back Prompting Does
- What Is Chain-of-Thought Prompting?
- How Chain-of-Thought Prompting Actually Works (with Diagram Reference)
- What's Actually Happening Under the Hood
- Benefits of Chain-of-Thought Prompting
- Limitations of Chain-of-Thought Prompting (and How to Handle Them)
- When Should You Use Chain-of-Thought Prompting?
- Final Thoughts

In my last post, I shared a technique called Step-Back Prompting: a way to get a large language model (LLM) to zoom out and think in broad strokes before solving a complex problem. This time, I'm looking at something that works in the opposite direction: Chain-of-Thought (CoT) Prompting.
If Step-Back is like sketching a blueprint, CoT is like assembling the structure piece by piece.
Quick Recap: What Step-Back Prompting Does
With Step-Back Prompting, you guide the model to:
Zoom Out: Get a top-level overview instead of rushing into details.
Frame the Problem: For example, "Break down the key parts of a secure web app."
Explore Each Piece: Then go into each part one by one in follow-up prompts.
It's helpful when you're building something with multiple moving parts, like systems, workflows, or plans.
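To make the recap concrete, here's a minimal sketch of that flow in Python. The `ask()` helper is a hypothetical stand-in for whatever LLM client you use, and the prompts are just illustrations.

```python
# Minimal Step-Back sketch: zoom out first, then drill into one piece.
def ask(prompt: str) -> str:
    # Placeholder: swap in your actual LLM client call here.
    print(f"[prompt sent]\n{prompt}\n")
    return "<model response>"

# 1. Zoom out / frame the problem.
overview = ask("Break down the key parts of a secure web app.")

# 2. Explore each piece, reusing the overview as context.
details = ask(
    f"Using this breakdown:\n{overview}\n\n"
    "Explain how to approach the authentication part."
)
```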
What Is Chain-of-Thought Prompting?
Chain-of-Thought Prompting is a technique where the AI answers a question step by step instead of all at once.
You start by breaking the big question into smaller, easier parts. The AI solves each part one at a time, using its previous answers to help with the next. Finally, it puts all the steps together to give one clear, final answer.
Think of it like solving a math problem by showing your work: each step helps build the full solution.
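As a tiny illustration, here's the same question asked directly and then with a step-by-step instruction added. The `ask()` helper is again a hypothetical placeholder for a real model call, and the question is just an example.

```python
# Direct prompt vs. a Chain-of-Thought style prompt for the same question.
def ask(prompt: str) -> str:
    print(f"[prompt sent]\n{prompt}\n")
    return "<model response>"

direct_prompt = "If a train travels 120 km in 1.5 hours, what is its average speed?"

cot_prompt = (
    "If a train travels 120 km in 1.5 hours, what is its average speed?\n"
    "Work through it step by step: identify the distance, identify the time, "
    "divide distance by time, then state the final answer."
)

ask(direct_prompt)
ask(cot_prompt)
```

The only difference between the two is the instruction to show the intermediate steps, and that alone changes how the model approaches the problem.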
How Chain-of-Thought Prompting Actually Works (with Diagram Reference)
Let's say a user sends a complex question to the AI, like:
"How can I build an effective machine learning model?"
Instead of trying to answer that all at once, Chain-of-Thought (CoT) breaks the question down and walks through it step by step, just like what's shown in the diagram.
Step-by-Step Walkthrough:
Query Decomposition
The first thing the system does is split the user query into smaller, ordered sub-questions, say five parts.
Step-by-Step Execution
For sub-question 1, the AI generates embeddings, processes the context, and gives an answer.
Then it uses that answer as part of the context for sub-question 2.
This repeats for all steps.
Each new answer is influenced by everything the model has seen before, just like a conversation history.
Context Propagation
The cool part is: answers aren't isolated. The AI doesn't forget what it just said.
In the diagram, you can see how the outputs from earlier steps feed into the next stage.
Final Synthesis
Once all the sub-questions are answered, the model combines all the responses to form a single, final answer.
That's what you see on the right in the image, where everything flows into one unified output.
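Here's a rough Python sketch of that whole loop. The `llm()` stub, the sub-questions, and the synthesis prompt are all illustrative assumptions rather than any specific framework's API; the point is the shape of the pattern: decompose, answer step by step while carrying context forward, then synthesize.

```python
# Sketch of the CoT loop described above. llm() is a hypothetical stand-in
# for a real model call; swap in your own client.
def llm(prompt: str) -> str:
    return f"<model answer to: {prompt[:40]}...>"

question = "How can I build an effective machine learning model?"

# 1. Query decomposition: split the big question into ordered sub-questions.
sub_questions = [
    "What problem is the model solving, and what data is available?",
    "How should the data be cleaned and split for training and evaluation?",
    "Which model family fits this problem and data size?",
    "How should the model be trained and tuned?",
    "How should the final model be evaluated and monitored?",
]

# 2 & 3. Step-by-step execution with context propagation: each answer is
# appended to the running context so later steps can build on earlier ones.
context = f"Original question: {question}"
for i, sub_q in enumerate(sub_questions, start=1):
    answer = llm(f"{context}\n\nStep {i}: {sub_q}\nAnswer this step only.")
    context += f"\nStep {i} answer: {answer}"

# 4. Final synthesis: combine all intermediate answers into one response.
final_answer = llm(
    f"{context}\n\nUsing the step answers above, give one complete answer "
    "to the original question."
)
print(final_answer)
```

Nothing here is tied to one provider: the same pattern works whether each step is a separate API call, as sketched here, or a single prompt that asks the model to number its reasoning steps.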
What's Actually Happening Under the Hood
Embeddings at Every Step
Text is converted into embeddings: mathematical vectors that capture meaning. This lets the model "understand" relationships between ideas, not just the literal words.
Sequential Reasoning
Chain-of-Thought isn't just Q&A. It's structured reasoning, where each answer informs the next, just like how a person would think out loud while solving a complex task.
Memory-Like Context
The model doesn't forget. It carries forward everything it's said so far, allowing the conversation (or problem-solving) to stay on track and build logically.
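To show what "vectors that capture meaning" buys you, here's a toy example with made-up numbers. A real system would get its embeddings from an embedding model and they would have hundreds of dimensions, but the idea is the same: related ideas end up with a higher cosine similarity than unrelated ones.

```python
# Toy illustration of embedding similarity. The vectors are invented for the
# example; a real pipeline would produce them with an embedding model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" (real ones are much larger).
train_model = np.array([0.9, 0.8, 0.1, 0.0])
tune_hyperparameters = np.array([0.8, 0.9, 0.2, 0.1])
wash_dishes = np.array([0.0, 0.1, 0.9, 0.8])

print(cosine_similarity(train_model, tune_hyperparameters))  # high: related ideas
print(cosine_similarity(train_model, wash_dishes))           # low: unrelated ideas
```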
Benefits of Chain-of-Thought Prompting
1. Improves Accuracy on Complex Tasks
By breaking a big problem into smaller steps, CoT reduces the chances of the model making logical mistakes or jumping to wrong conclusions.
2. Transparent Reasoning
You can see exactly how the model reaches its final answer. This makes it easier to debug, understand, and trust the output.
3. More Control Over Thought Process
You get to control the flow: what gets asked first, what builds on what, and how deep each step goes. It's like directing the model's brain one move at a time.
4. Reusability of Steps
Individual step responses can be reused, modified, or improved without starting over, which is useful when iterating or refining.
5. Scales Well for Procedural Problems
It works great for problems that follow logical sequences, like coding workflows, algorithms, or decision-making tasks.
Limitations of Chain-of-Thought Prompting (and How to Handle Them)
1. Slower Execution
Since each step is a separate prompt, it takes more time.
Solution: Use CoT only when the task is complex enough to need it.
2. Prompt Management Overhead
You need to carefully design the sub-questions.
Solution: Use a clear and repeatable template to structure your CoT prompts.
3. Error Propagation
Mistakes in early steps can affect later ones.
Solution: Review intermediate answers before moving to the next step.
4. Overkill for Simple Tasks
Some problems donāt need step-by-step reasoning.
Solution: Start simple. Use CoT only if a direct prompt gives weak or vague answers.
5. Context Window Limits
Too many steps can exceed the model's context window.
Solution: Keep it concise. Aim for 5-7 steps max, or summarize earlier answers mid-way (see the sketch below).
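Here's a small sketch of that summarize-mid-way workaround, reusing the same kind of hypothetical `llm()` stub as in the earlier sketch. The character threshold is purely illustrative; a real implementation would count tokens against the actual context window of the model in use.

```python
# Sketch of compressing the running context before it gets too long.
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"<model answer to: {prompt[:40]}...>"

MAX_CONTEXT_CHARS = 4000  # illustrative threshold, not a real model limit

def maybe_summarize(context: str) -> str:
    if len(context) > MAX_CONTEXT_CHARS:
        return llm(f"Summarize the key conclusions so far in a few sentences:\n{context}")
    return context

context = "Original question: ...\nStep 1 answer: ...\nStep 2 answer: ..."
context = maybe_summarize(context)  # compress before moving to the next step
```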
When Should You Use Chain-of-Thought Prompting?
Use CoT when the problem needs step-by-step thinking, not just a quick answer. It's great for anything that involves logic, process, or multiple options.
Use Case 1: Technical Troubleshooting
Why is my laptop not charging?
The model can walk through steps like checking the adapter, the outlet, and battery health, one by one.
Use Case 2: Comparing Options
Which phone should I buy under $500?
CoT helps the model compare specs, use cases, and reviews step by step before suggesting the best fit.
Use Case 3: Explaining Concepts
How does a washing machine work?
The AI breaks it into parts (water intake, spinning, draining) and explains each clearly in order.
Final Thoughts
Chain-of-Thought prompting gives you more control, clarity, and accuracy, especially when you're working on something that needs structured reasoning or teaching. It's not the fastest method, but it's one of the most reliable for complex tasks.
If you're building with GenAI, try using CoT when:
You're stuck on a multi-part problem
You want the AI to "show its work"
A single-shot answer just isnāt enough
It's like coding with comments: you understand the process, not just the output.