Building Thinking Models with CoT

Hrishith Savir

Chain-of-thought prompting simply asks the model to show its reasoning before arriving at an answer, eliminating the “immediate jump to conclusion” step taken by most LLMs.

Breaking the problem statement into smaller logical steps also reduces hallucinations, since each step can be checked against the last, and makes the model better at reasoning and calculating responses.
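As a minimal sketch of the idea: the only difference between a direct prompt and a chain-of-thought prompt is an instruction to reason first. The `llm` function below is a hypothetical stand-in for whatever model or API you use; the question is made up for illustration.

```python
# Hypothetical stand-in for any LLM text-completion call.
# Swap in your own model or API client here.
def llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompt: the model tends to jump straight to an answer.
direct_prompt = f"Q: {question}\nA: The answer is"

# Zero-shot chain-of-thought prompt: the trigger phrase asks the
# model to lay out intermediate reasoning before answering.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# print(llm(cot_prompt))  # expected: stepwise reasoning, then the answer
```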

It does not inherently “think” like a human does; rather, it simulates a reasoning process, which improves:

  1. Accuracy

  2. Precision

  3. Transparency

Using this, we can convert any model from auto-answering mode to problem-solving mode, as in the few-shot sketch below.
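A hedged sketch of the few-shot variant: instead of just a trigger phrase, the prompt includes a worked example whose reasoning format the model imitates. The exemplar and question here are invented for illustration.

```python
# Few-shot chain-of-thought: a worked example in the prompt shows
# the model the step-by-step format to imitate for new questions.
exemplar = (
    "Q: A train travels 60 km in 1.5 hours. What is its speed?\n"
    "A: Speed is distance divided by time. 60 / 1.5 = 40. "
    "The answer is 40 km/h.\n"
)

new_question = "Q: A car travels 150 km in 2.5 hours. What is its speed?\nA:"

few_shot_cot_prompt = exemplar + "\n" + new_question
# Following the exemplar, the model should reply with its reasoning
# (150 / 2.5 = 60) before stating "The answer is 60 km/h."
```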
