Talk Smart to AI: How Prompting Helps You Escape the GIGO Trap.


We’ve all heard the old programming adage: GIGO — Garbage In, Garbage Out. In the age of AI, it’s more relevant than ever. Large language models like ChatGPT or Gemini are incredibly powerful, but they’re only as good as the prompts they receive. If you feed the model vague, messy, or poorly structured inputs, don’t be surprised when the output is confusing, irrelevant, or just plain wrong.
Prompting is the new keyboard shortcut for controlling AI — a mix of art and logic that turns raw potential into useful results. Whether you're a developer, writer, student, or entrepreneur, learning how to guide AI through well-crafted prompts is the key to unlocking its full value.
In this blog, we’ll break down:
What is prompting?
Different styles of prompting — from basic to advanced.
Proven techniques to get better, faster, and more relevant results from AI.
What is Prompting?
Prompting is the act of giving instructions or input to an AI model to generate a desired output.
Think of it like talking to a super-smart assistant — your words (the prompt) guide what the assistant does. Whether you ask it to write a poem, explain code, summarize an article, or plan your day, the AI responds based on how you ask.
In simple terms:
The prompt is your command or question.
The response is the AI’s answer or action.
Prompting is not just about what you ask, but how you ask it. A small tweak in wording, structure, or tone can completely change the quality and relevance of the output.
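For example, "Summarize this article" will usually get you a generic paragraph, while "Summarize this article in three bullet points for a non-technical manager" gets you something you can actually use.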
Prompting Styles: The 3 Core Formats Developers Should Be Aware Of
When working with AI models, prompt structure matters. Different models expect different input formats to interpret instructions correctly. Here are the three most common prompting styles developers should be aware of:
1. Alpaca Prompt
Originally used in Stanford’s Alpaca model (a fine-tuned LLaMA), this format is simple and human-readable.
Structure:
### Instruction: Explain what a closure is in JavaScript.
### Response: A closure is...
Use Case: Great for instruction-following models trained in a Q&A or tutoring style.
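If you're targeting an Alpaca-style model, formatting the prompt is just string templating. Here's a rough sketch; the optional ### Input block comes from the original Alpaca template and is only needed when the task has extra context, and the actual generation call is left out because it depends entirely on how your model is hosted:
# A minimal sketch of building an Alpaca-style prompt string.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) in the Alpaca style."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Explain what a closure is in JavaScript.")
print(prompt)  # send this string to however your Alpaca-style model is served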
2. ChatML Format (Used by OpenAI Models)
Popularized by OpenAI (e.g., GPT-3.5, GPT-4), this format mimics a multi-turn chat structure using system, user, and assistant roles.
Structure:
{ "role": "system", "content": "...."},
{ "role": "user", "content": "...."},
{ "role": "assitant", "content": "...."},
Here you can see the use of the system role. This role acts like a control panel for developers — it lets us steer the AI’s behavior in a specific direction that suits our use case. As developers, we rarely want a generic, one-size-fits-all AI. We want responses tailored to our product, tone, or domain.
In this blog, we’ll mainly use the system role to show how you can shape the AI’s thinking and output to match your exact needs.
Use Case: Ideal for conversational models and chat-based interactions. Supports roles, instructions, and better context management.
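Here's a minimal sketch of that message list using the OpenAI Python SDK (the same client used in the later examples); the model name and the system prompt are just examples, and the API key is loaded from a .env file as in the rest of this post:
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # loads OPENAI_API_KEY from a .env file
client = OpenAI()

# The system message steers the model; the user message carries the actual question.
res = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a senior JavaScript mentor. Keep answers under 150 words and include one tiny code snippet.",
        },
        {"role": "user", "content": "Explain what a closure is in JavaScript."},
    ],
)
print(res.choices[0].message.content)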
3. INST Format (Instruction Format)
Used in models like Vicuna or FastChat, INST format wraps instruction and response in special tokens.
Structure:
[INST]What is programming?[/INST]
[RESP]Programming is the process of ....[/RESP]
Use Case: Often used in open-source fine-tuned models that expect structured tokens to distinguish input from output.
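As a rough sketch, wrapping the instruction yourself is plain string formatting. The exact special tokens vary between models, so check the model card (or use the tokenizer's apply_chat_template in Hugging Face transformers) before relying on this:
# A minimal sketch: wrapping a user instruction in INST-style tokens.
def build_inst_prompt(instruction: str) -> str:
    return f"[INST] {instruction.strip()} [/INST]"

prompt = build_inst_prompt("What is programming?")
print(prompt)  # the model is expected to continue with its response after [/INST]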
Prompting Techniques
Now that we understand what prompting is, let’s explore some core techniques used to get better and more reliable responses from AI models. These techniques help you guide the model more effectively, depending on the task complexity and desired output.
Here are the five main prompting techniques:
1. Zero-Shot Prompting (No Examples)
In zero-shot prompting, you simply ask the AI a question or give it a task without any examples. It relies entirely on the model's pre-training.
from openai import OpenAI
from dotenv import load_dotenv
from openai.types.chat import ChatCompletionMessageParam
load_dotenv()
client = OpenAI()
SYSTEM_PROMPT = """
You are a helpful and experienced math teacher.
Explain concepts clearly and step-by-step, using simple language when needed.
Always encourage the student and make sure they understand the logic behind each step.
If the question involves calculation, show your working. Be calm, supportive, and never assume prior knowledge beyond what's asked.
"""
messages: list[ChatCompletionMessageParam] = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

while True:
    prompt = input("Query > ")
    if prompt.lower() in {"exit", "quit"}:
        print("Exiting...")
        break

    messages.append({"role": "user", "content": prompt})

    res = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=messages,
    )

    model_content = res.choices[0].message.content
    messages.append({"role": "assistant", "content": model_content})
    print("Teacher GPT: ", model_content)
# Output
#
# Query > 5 * 3 / 5 + 24 * 3
# Teacher GPT: Let's work through the expression step by step:
#
# The expression is:
#
# \[ 5 \times 3 \div 5 + 24 \times 3 \]
#
# ### Step 1: Follow the order of operations (PEMDAS/BODMAS)
# - Multiplication and division come before addition.
# - Do multiplication and division from left to right.
#
# ### Step 2: Calculate \( 5 \times 3 \)
# \[ 5 \times 3 = 15 \]
#
# Expression becomes:
# \[ 15 \div 5 + 24 \times 3 \]
#
# ### Step 3: Calculate \( 15 \div 5 \)
# \[ 15 \div 5 = 3 \]
#
# Expression becomes:
# \[ 3 + 24 \times 3 \]
#
# ### Step 4: Calculate \( 24 \times 3 \)
# \[ 24 \times 3 = 72 \]
#
# Expression becomes:
# \[ 3 + 72 \]
#
# ### Step 5: Add \( 3 + 72 \)
# \[ 3 + 72 = 75 \]
#
# ### Final answer:
# \[
# \boxed{75}
# \]
#
# If you have any questions about any of these steps, feel free to ask! You're doing great!
# Query > exit
# Exiting...
2. Few-Shot Prompting
Here, you give the model a few examples before your actual query. This helps it learn the pattern you want it to follow — all within the same prompt.
from openai import OpenAI
from dotenv import load_dotenv
from openai.types.chat import ChatCompletionMessageParam
load_dotenv()
client = OpenAI()
SYSTEM_PROMPT = """
You are a helpful and experienced math teacher.
Explain concepts clearly and step-by-step, using simple language when needed.
Always encourage the student and make sure they understand the logic behind each step.
If the question involves calculation, show your working. Be calm, supportive, and never assume prior knowledge beyond what's asked.
Example 1:
User: What is the area of a triangle with base 10 cm and height 5 cm?
Assistant: To find the area of a triangle, we use the formula:
Area = ½ × base × height
Here, base = 10 cm and height = 5 cm.
So,
Area = ½ × 10 × 5 = 25 cm²
The area of the triangle is 25 square centimeters.
Example 2:
User: Can you explain what a prime number is?
Assistant: Of course! A prime number is a number that has exactly two factors: 1 and itself.
That means it can only be divided evenly by 1 and the number itself.
For example: 2 is prime, 3 is prime, 4 is not prime.
"""
messages: list[ChatCompletionMessageParam] = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

while True:
    prompt = input("Query > ")
    if prompt.lower() in {"exit", "quit"}:
        print("Exiting...")
        break

    messages.append({"role": "user", "content": prompt})

    res = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=messages,
    )

    model_content = res.choices[0].message.content
    messages.append({"role": "assistant", "content": model_content})
    print("Teacher GPT: ", model_content)
# Output
#
# Query > 5 * 3 / 5 + 24 * 3
# Teacher GPT: Let's solve the expression step-by-step:
# \[5 \times 3 \div 5 + 24 \times 3\]
#
# Step 1: Multiply \(5 \times 3\)
# \[5 \times 3 = 15\]
#
# Now the expression becomes:
# \[15 \div 5 + 24 \times 3\]
#
# Step 2: Divide \(15 \div 5\)
# \[15 \div 5 = 3\]
#
# Now the expression becomes:
# \[3 + 24 \times 3\]
#
# Step 3: Multiply \(24 \times 3\)
# \[24 \times 3 = 72\]
#
# Now the expression becomes:
# \[3 + 72\]
#
# Step 4: Add \(3 + 72\)
# \[3 + 72 = 75\]
#
# So, the value of the expression is **75**.
#
# If you want, I can explain any part again! You're doing great!
3. Chain-of-Thought Prompting (CoT)
This technique encourages the model to show its reasoning step by step instead of jumping directly to the final answer. It improves accuracy, especially for logic- or math-heavy tasks. Here we make the model behave more like a human: we first look at a problem, understand it, work through it, produce an answer, and finally validate it.
from openai import OpenAI
from dotenv import load_dotenv
import json
from openai.types.chat import ChatCompletionMessageParam
load_dotenv()
client = OpenAI()
SYSTEM_PROMPT = """
You are a helpful and experienced math teacher.
Explain concepts clearly and step-by-step, using simple language when needed.
Always encourage the student and make sure they understand the logic behind each step.
If the question involves calculation, show your working. Be calm, supportive, and never assume prior knowledge beyond what's asked.
For every user input, work through the problem in stages: first analyse the query, then think through the solution (thinking more than once if needed), produce an output, validate it, and finally return the result with an explanation.
Follow the steps in sequence: "analyse", "think", "output", "validate" and finally "result".
Rules:
1. Follow the strict JSON output format as per the schema.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.
Output Format:
{ "step": "string", "content": "string" }
Example:
Input: What is 2 + 2
Output: { "step": "analyse", "content": "Alright! The user is interested in a maths query and is asking about a basic arithmetic operation." }
Output: { "step": "think", "content": "To perform this addition, I must go from left to right and add all the operands." }
Output: { "step": "output", "content": "4" }
Output: { "step": "validate", "content": "Seems like 4 is the correct answer for 2 + 2." }
Output: { "step": "result", "content": "2 + 2 = 4 and this is calculated by adding all the numbers." }
"""
messages: list[ChatCompletionMessageParam] = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

while True:
    prompt = input("Query > ")
    if prompt.lower() in {"exit", "quit"}:
        print("Exiting...")
        break

    messages.append({"role": "user", "content": prompt})

    # Inner loop: keep asking the model for the next step until it returns "result".
    while True:
        res = client.chat.completions.create(
            model="gpt-4.1-mini",
            response_format={"type": "json_object"},
            messages=messages,
        )

        model_content = res.choices[0].message.content
        if model_content is None:
            print("⚠️ Assistant returned no content.")
            continue

        # Feed each intermediate step back into the conversation so the model
        # can build on its own reasoning.
        messages.append({"role": "assistant", "content": model_content})

        parsed_response = json.loads(model_content)
        if parsed_response.get("step") != "result":
            print("Think:", parsed_response.get("content"))
            continue

        print("Parsed response:", parsed_response.get("content"))
        break
# Output
#
# Query > 5 * 3 / 5 + 24 * 3
# Think: The user has provided an arithmetic expression involving multiplication, division, and addition: 5 * 3 / 5 + 24 * 3. The question is to evaluate this expression step-by-step.
# Think: I need to remember the order of operations: multiplication and division are performed first from left to right, then addition. So I will first calculate 5 * 3, then divide by 5, and also calculate 24 * 3. Finally, I'll add the results together.
# Think: Calculate 5 * 3 = 15. Then, 15 / 5 = 3. Next, calculate 24 * 3 = 72. Finally, add 3 + 72 = 75.
# Think: I have followed the order of operations carefully, and the calculations look correct: 5*3=15, 15/5=3, 24*3=72, and 3+72=75.
# Parsed response: The value of the expression 5 * 3 / 5 + 24 * 3 is 75. We first multiplied and divided as per order of operations, then added the final results.
# Query > exit
# Exiting...
4. Self-Consistency Prompting
The goal of self-consistency prompting is to generate multiple responses and reconcile them into one reliable answer. In its classic form you sample the same model several times and take the majority answer; here we adapt the idea by asking two different models and using a third call to compare their answers.
Use Case: Tasks where accuracy and reliability matter — like exams, coding problems, or fact-checking.
from openai import OpenAI
from dotenv import load_dotenv
from openai.types.chat import ChatCompletionMessageParam
load_dotenv()
client = OpenAI()
SYSTEM_PROMPT = """
You are a helpful and experienced math teacher.
Explain concepts clearly and step-by-step, using simple language when needed.
Always encourage the student and make sure they understand the logic behind each step.
If the question involves calculation, show your working. Be calm, supportive, and never assume prior knowledge beyond what's asked.
Example 1:
User: What is the area of a triangle with base 10 cm and height 5 cm?
Assistant: To find the area of a triangle, we use the formula:
Area = ½ × base × height
Here, base = 10 cm and height = 5 cm.
So,
Area = ½ × 10 × 5 = 25 cm²
The area of the triangle is 25 square centimeters.
Example 2:
User: Can you explain what a prime number is?
Assistant: Of course! A prime number is a number that has exactly two factors: 1 and itself.
That means it can only be divided evenly by 1 and the number itself.
For example: 2 is prime, 3 is prime, 4 is not prime.
"""
messages: list[ChatCompletionMessageParam] = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

while True:
    prompt = input("Query > ")
    if prompt.lower() in {"exit", "quit"}:
        print("Exiting...")
        break

    messages.append({"role": "user", "content": prompt})

    nano_res = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=messages,
    )
    print("nano teacher GPT: ", nano_res.choices[0].message.content)

    mini_res = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=messages,
    )
    print("mini teacher GPT: ", mini_res.choices[0].message.content)
    comparison_prompt = f"""
    User: {prompt}
    Nano Answer: {nano_res.choices[0].message.content}
    Mini Answer: {mini_res.choices[0].message.content}
    I have two answers to the question above, one from Nano and one from Mini. I want you to compare these answers.
    Cases:
    Case 1: Both answers are correct.
    Assistant: "Both answers are correct...." and then choose either answer from above.
    Case 2: Only the Nano answer is correct.
    Assistant: "The Nano answer is correct...." and then use Nano's answer from above.
    Case 3: Only the Mini answer is correct.
    Assistant: "The Mini answer is correct...." and then use Mini's answer from above.
    Case 4: Both answers are wrong.
    Assistant: "Both answers are wrong..." and then give your own answer to the user's question.
    """
    res = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": comparison_prompt},
        ],
    )

    messages.append(
        {"role": "assistant", "content": res.choices[0].message.content}
    )
    print("Teacher GPT: ", res.choices[0].message.content)
# Output
# Query > 5 * 3 / 5 + 24 * 3
# nano teacher GPT: Let's solve this step by step, following the order of operations (PEMDAS/BODMAS):
#
# 1. First, handle the multiplication and division from left to right:
# - \( 5 \times 3 = 15 \)
# - \( 15 \div 5 = 3 \)
# - \( 24 \times 3 = 72 \)
#
# 2. Now, the expression becomes:
# \[
# 3 + 72
# \]
#
# 3. Finally, add:
# \[
# 3 + 72 = 75
# \]
#
# So, the answer is **75**!
#
# mini teacher GPT: Let's work through the expression step-by-step:
# The expression is:
# 5 * 3 / 5 + 24 * 3
#
# Step 1: Multiply 5 by 3
# 5 * 3 = 15
#
# Now the expression is:
# 15 / 5 + 24 * 3
#
# Step 2: Divide 15 by 5
# 15 / 5 = 3
#
# Now the expression is:
# 3 + 24 * 3
#
# Step 3: Multiply 24 by 3
# 24 * 3 = 72
#
# Now the expression is:
# 3 + 72
#
# Step 4: Add 3 and 72
# 3 + 72 = 75
#
# So, the result of the expression is **75**.
#
# If you want, I can explain any step in more detail!
#
# Teacher GPT: Both are correct answers. Both the Nano and Mini answers correctly solve the expression step-by-step and get the final answer of 75.
#
# Here is one of the answers, chosen from above (Nano):
#
# Let's work through the expression step-by-step:
#
# The expression is:
# 5 * 3 / 5 + 24 * 3
#
# Step 1: Multiply 5 by 3
# 5 * 3 = 15
#
# Now the expression is:
# 15 / 5 + 24 * 3
#
# Step 2: Divide 15 by 5
# 15 / 5 = 3
#
# Now the expression is:
# 3 + 24 * 3
#
# Step 3: Multiply 24 by 3
# 24 * 3 = 72
#
# Now the expression is:
# 3 + 72
#
# Step 4: Add 3 and 72
# 3 + 72 = 75
#
# So, the result of the expression is **75**.
#
# If you want, I can explain any step in more detail!
# Query > exit
# Exiting...
5. Persona-based Prompting
Persona-based prompting is a technique where you guide the AI to adopt a specific role or identity — like a math teacher, doctor, interviewer, mentor, or even a sarcastic friend.
It builds on zero-shot or few-shot prompting, but adds a personality layer to control the tone, style, and depth of the AI's responses.
For more accurate and consistent behavior, we often use few-shot prompting with 40–80 carefully crafted examples, helping the AI learn to respond exactly like the persona we want to create.
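Here's a minimal sketch using the same OpenAI client as the earlier examples. The persona ("Rev", a sarcastic but kind code reviewer) and the two sample exchanges are purely illustrative stand-ins for the much larger example set you'd use in production:
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

# Persona prompt: a sarcastic-but-kind senior code reviewer.
# In a real product you would pad this with many more few-shot examples;
# two are shown here just to set the pattern.
SYSTEM_PROMPT = """
You are "Rev", a senior code reviewer. You are witty and a little sarcastic,
but never mean, and you always end with one concrete suggestion.

Example 1:
User: I put all my code in one 2000-line file. Thoughts?
Rev: Bold strategy. Your scroll wheel must be exhausted. Split it into modules by feature, starting with the API calls.

Example 2:
User: Should I write tests?
Rev: Only if you enjoy sleeping at night. Start with one test for your most fragile function.
"""

res = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I copy-pasted the same function into five files."},
    ],
)
print(res.choices[0].message.content)
Swapping the system prompt swaps the personality; nothing else in the code has to change.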
Conclusion: Key Takeaways on Prompting for Developers
Prompting is your control panel for AI — it lets you guide the model’s behavior with simple, structured input.
Garbage In, Garbage Out (GIGO) applies strongly — well-crafted prompts = high-quality output.
Prompting styles matter:
Alpaca for simple instructions
ChatML for chat-based interaction (used by most APIs)
INST format for token-based open models
ChatML is the current standard — most developers use it, and it's auto-converted internally for many models.
The system role is powerful — use it to set the tone, behavior, or personality of the AI for your specific use case.
Use advanced techniques for better accuracy:
Zero-shot for quick tasks
Few-shot for pattern learning
Chain-of-Thought for step-by-step logic
Self-consistency for reliability in reasoning
Persona-based for controlling tone, style, and behavior by assigning the AI a specific role or identity
Prompting is both a skill and a tool — the more you practice, the more precise and useful your AI outputs become.