Garbage Results? Fix Your Prompt, Not the Model

Buddy Coder
9 min read

Riya fell into a GIGO Trap

Riya, a second-year student at Delhi University, was super excited to use an AI tool for her political science assignment. She typed:

“Explain democracy.”

And that was it. What she got was a long, textbook-style essay that didn’t match her topic or vibe at all. That’s when it hit her: “Garbage In, Garbage Out” (GIGO).
If your prompt is too generic or vague, the AI will give you equally context-free responses.

Whether you're in a coding lab or writing an essay, the rule is the same:
If the input is even a little confusing, the output will be just as random.

Welcome to the GIGO world—prompt smartly, get results clearly. 😄

What Is a Prompt? 🤔

A prompt is the input you give an LLM (Large Language Model)—it could be a question, an instruction, or some context. The model uses that text to decide what to generate next. Crafting good prompts is prompt engineering: the art of designing inputs that yield optimal outputs.

Garbage In, Garbage Out 💩

In AI, GIGO means poor‑quality or ambiguous prompts produce poor answers.

  • ❌ Bad Prompt:

    "Tell me about climate change."

    ✅ Good Prompt:

    "Write a 500-word argumentative essay on how climate change affects developing countries, using recent examples, statistics, and graphs."

By specifying length, focus, and style, you guide the model toward useful, relevant content.

1. Zero-Shot Prompting

📚 Meet Tanvi — The Zero-Shot Person

Tanvi, a design student at NIFT Delhi, wanted to create a tagline for her new recycled clothing brand. She opened ChatGPT and typed:

"Give me a tagline for my brand."

Boom. The AI replied with:
"Style with Purpose."

Not bad. She hadn’t trained it, given examples, or explained much — yet the AI understood and gave something relevant. That’s Zero-Shot Prompting!


💡 What is it, exactly?

Zero-Shot Prompting means you're asking the AI to do something without giving it any prior examples or detailed context. You're just... shooting your shot 🏹

And sometimes, it works. Sometimes, it's a little off. Why? Because without enough context, the AI is guessing your intent based on patterns it knows.

👎 Bad Zero-Shot Prompt:

"Write a poem."

Result? You might get a super random poem — not in your desired style or mood.

👍 Better Zero-Shot Prompt:

"Write a short romantic poem in the style of Mahadevi Varma, about long-distance love."

Even though it’s still zero-shot (no examples given), it has clarity, direction, and vibe. That’s the trick.

🧠 Zero-Shot Prompting is great when:

  • You want quick results

  • You’re doing simple tasks

  • You know how to phrase things clearly
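The difference between a vague and a clear zero-shot prompt can be sketched in code. `buildZeroShot` below is a hypothetical helper (not part of any SDK) that bolts style, length, and topic onto a bare task:

```javascript
// Hypothetical helper (not part of any SDK): add clarity, direction,
// and vibe to a bare zero-shot task.
function buildZeroShot(task, { style, length, topic } = {}) {
  const parts = [task];
  if (style) parts.push(`Style: ${style}.`);
  if (length) parts.push(`Length: ${length}.`);
  if (topic) parts.push(`Topic: ${topic}.`);
  return parts.join(" ");
}

const vague = buildZeroShot("Write a poem.");
const clear = buildZeroShot("Write a poem.", {
  style: "Mahadevi Varma, romantic",
  length: "short",
  topic: "long-distance love",
});

console.log(vague); // "Write a poem."
console.log(clear); // same task, but with clear style, length, and topic
```

Both are still zero-shot (no examples), but the second one leaves the model far less room to guess.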


2. Few-Shot Prompting 🚀

👩‍🎤 Meet Karan — The Copywriter in Crisis

Karan, a BBA student from Pune, was helping his friend write ad copy for a college fest. He asked ChatGPT:

“Write a fun caption for an Instagram post.”

AI gave a super generic response:
"Join us for fun and games!"
Bro, this sounds like a pamphlet from the ’90s 😩

Then he tried this:

✅ Few-Shot Prompt (with examples):

"Here are some examples of the tone I want:

'Your weekend plans? Cancelled. College Fest is calling!'
'The only ‘class’ you’ll attend this week — the DJ one.'

Now write one for our college fest ad."

Boom! Now the AI got the vibe and replied with:
"Books down, hands up — it's fest season, baby!"

That’s Few-Shot Prompting:
➡️ Give a few examples, set the tone, and AI learns what you want.
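Karan's trick maps neatly onto a chat API's message list: each example becomes a user/assistant pair that the model imitates. A minimal sketch, assuming OpenAI-style roles; `buildFewShot` is a hypothetical helper:

```javascript
// Hypothetical helper: pack examples into a few-shot message list
// using OpenAI-style chat roles (adapt to your SDK of choice).
function buildFewShot(instruction, examples, task) {
  const messages = [{ role: "system", content: instruction }];
  for (const ex of examples) {
    messages.push({ role: "user", content: ex.prompt });
    messages.push({ role: "assistant", content: ex.reply });
  }
  messages.push({ role: "user", content: task });
  return messages;
}

const messages = buildFewShot(
  "You write punchy Instagram captions for college fests.",
  [
    { prompt: "Caption for the fest announcement", reply: "Your weekend plans? Cancelled. College Fest is calling!" },
    { prompt: "Caption for DJ night", reply: "The only 'class' you'll attend this week — the DJ one." },
  ],
  "Now write one for our college fest ad."
);
// messages = 1 system + 2 example pairs + 1 task = 6 entries
```

The examples ride along with every request, so the model sees the tone it should copy before it writes anything.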


3. ⛓️Chain-of-Thought Prompting ⛓️

🤓 Say hello to Meera — the Logic Queen

Meera, an engineering student from Chennai, was using ChatGPT to solve a tricky logical reasoning problem:

“A train leaves city A at 5pm... When will it reach city B?”

The AI gave a wrong answer — fast but incorrect.
She realized: “AI needs to think through this.”

So she rephrased:

✅ Chain-of-Thought Prompt:

"Let’s solve this step by step. The train leaves City A at 5pm, the distance is 300 km, and the speed is 60 km/h. First find the travel time, then add it to the departure time to get the arrival time."

This time AI explained each step clearly and got it right.
That’s Chain-of-Thought Prompting:
➡️ You ask the model to “think aloud” step-by-step like a human would.

Here is how that looks with the OpenAI Node SDK (requires the "openai" package and an OPENAI_API_KEY in your environment):

// The phrase "Let's think step by step" nudges the model into
// chain-of-thought reasoning before it commits to an answer.
const { OpenAI } = require("openai");
const openai = new OpenAI();

const prompt = `
Q: If 3 people can paint a fence in 4 hours, how long would it take 6 people?
A: Let's think step by step.
`;

(async () => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  console.log(completion.choices[0].message.content);
})();

4. Self‑Consistency Prompting 🔄

Generate multiple Chain-of-Thought answers, then pick the most consistent, most common final answer. This “voting” approach reduces mistakes and improves reliability.

👨‍💼 Meet Dev — The UPSC Aspirant with Trust Issues 😅

Dev from Lucknow was using ChatGPT to practice answer writing for UPSC. He asked:

"Explain the difference between socialism and capitalism in 150 words."

First answer? Good.
Second try? A little different.
Third try? Even more different.

Dev scratched his head:
“Which one is correct? It says something different every time!”

Then he found the hack:
💡 Ask the AI the same question multiple times, and compare the answers to find the most consistent or insightful one.

Prompt:
"Give three different answers explaining socialism vs capitalism for a UPSC answer (150 words each)."

Then Dev analyzed them like a topper:

  • This one is too vague ❌

  • This one has examples, nice! 💯

  • This one got too ideological ❌


💡 What is Self-Consistency Prompting?

Instead of relying on one single response, you:

  • Ask the same question multiple times

  • Collect multiple outputs

  • Pick the most accurate, logical, or consistent one

It’s like taking an exam thrice and submitting the best answer. 😉
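The voting step is simple enough to sketch. Here the sampled answers are hard-coded stand-ins for real API calls; in practice you would call the model several times (ideally with temperature above 0) and vote on the final answers:

```javascript
// Count how often each final answer appears and keep the winner.
// The samples below are stand-ins for repeated model calls.
function majorityVote(answers) {
  const counts = new Map();
  for (const a of answers) counts.set(a, (counts.get(a) || 0) + 1);
  let best = null;
  for (const [answer, n] of counts) {
    if (best === null || n > counts.get(best)) best = answer;
  }
  return best;
}

// Pretend these are three sampled final answers from the model:
const samples = ["2 hours", "3 hours", "2 hours"];
console.log(majorityVote(samples)); // "2 hours"
```

One wrong reasoning path gets outvoted by the paths that agree, which is exactly why self-consistency helps on tricky logic problems.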


5. 🎭 Persona-Based Prompting

💡 What is Persona-Based Prompting?

It’s when you ask the AI to take on a character or role — a persona — while answering.
That changes how the AI talks: the tone, attitude, style, and even vocabulary.
You give the AI a role or personality to respond as:

  • A teacher

  • A comedian

  • A YouTuber

  • A 5-year-old explaining quantum physics 😅

🎤 Example Prompt:

“Give me career advice like you’re Shah Rukh Khan in a TED Talk.”

Here, you didn’t just ask for advice. You told the AI:

  • Who it is (SRK 👑)

  • What it’s doing (giving a TED Talk 🎙️)

  • What it’s talking about (career advice 💼)

So instead of a generic, boring “work hard, stay focused” response, you get something fun like:

The AI’s Response (in SRK’s Persona):

"Life is a film, and you are its hero. No role is too small... Your career may hit an interval, but the climax is in your hands."

Just say:

“Explain [topic] as if you're [persona]…”
and watch the magic happen!
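With a chat API, the persona usually goes in the system message so every reply stays in character. A minimal sketch, assuming OpenAI-style roles; `withPersona` is a hypothetical helper:

```javascript
// Hypothetical helper: pin the persona in the system message so the
// model keeps the character across the whole conversation.
function withPersona(persona, task) {
  return [
    { role: "system", content: `You are ${persona}. Stay in character.` },
    { role: "user", content: task },
  ];
}

const messages = withPersona(
  "Shah Rukh Khan giving a TED Talk",
  "Give me career advice."
);
// messages[0] sets who the AI is; messages[1] is what it should do
```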


6. 🎭 Role-Playing Prompting

💡 What is Role-Playing Prompting?

In Role-Playing Prompting, you assign both yourself and the AI a role — like actors in a scene.
You’re not just asking questions — you’re simulating a real conversation or situation.

It’s super helpful for:

  • Interview practice

  • Customer service training

  • Mock debates

  • Language learning

🎓 Example: UPSC Interview Roleplay

Prompt:

“You are a UPSC interview board member. I am a candidate appearing for my final round. Ask me questions one by one, and wait for my responses before continuing.”

Suddenly, the AI becomes a Sir/Ma'am from the panel:

“Good morning. Please introduce yourself and tell us why you chose Public Administration as your optional subject.”

And you're in the hot seat, ready to practice 😅


7. 🧠 Contextual Prompting

"AI, remember what we talked about earlier." 🧩

What is it?

Contextual Prompting means giving the AI a series of related prompts — building on earlier ones. You treat the AI like a conversation partner that can remember and refer back to earlier parts of your conversation.

💬 Example:

Prompt 1:

“Explain the concept of Artificial Neural Networks in simple terms.”

AI explains nicely.

Prompt 2:

“Now compare that with how the human brain works.”

The AI remembers what it told you before and connects the dots. You didn’t have to re-explain!
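Under the hood there is no memory magic: chat APIs are stateless, and "context" means the whole conversation gets re-sent each turn. A minimal sketch of keeping that history as plain data (the array you would pass as OpenAI-style `messages`):

```javascript
// The conversation so far; every new request includes this array.
const history = [];

// Record one exchange (user prompt + model reply) in the history.
function addTurn(history, userText, modelReply) {
  history.push({ role: "user", content: userText });
  history.push({ role: "assistant", content: modelReply });
}

addTurn(history, "Explain Artificial Neural Networks in simple terms.", "An ANN is ...");
addTurn(history, "Now compare that with how the human brain works.", "Unlike a brain ...");

// Because the second request carries the first exchange along, "that"
// in prompt 2 resolves to the ANN explanation from prompt 1.
console.log(history.length); // 4
```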


8. Multimodal Prompting 🖼️

"Text alone won't cut it; now we need visuals too."

🌟 What it is:

Multimodal prompting is when you use more than one type of input — like combining text + image, or even audio or code snippets, depending on the AI you're using.

📷 Example:

Prompt:

[Upload a photo of a math problem]
“Solve this equation and explain it step-by-step.”

Or…

Prompt:

[Upload a PDF]
“Summarize this data for a college presentation. Make it sound smart but chill.”

The AI reads both what it sees and what you say, then replies intelligently.
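In the OpenAI chat format, a multimodal prompt is a single user message whose content mixes text parts and image parts (other providers use similar shapes; check your SDK's docs). The image URL below is a placeholder:

```javascript
// One user message carrying both text and an image reference,
// in the OpenAI Chat Completions content-parts shape.
const message = {
  role: "user",
  content: [
    { type: "text", text: "Solve this equation and explain it step-by-step." },
    { type: "image_url", image_url: { url: "https://example.com/math-problem.jpg" } },
  ],
};
// Send `[message]` as the messages array to a vision-capable model.
```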


📌 Quick Recap:

| Prompting Style | What It Does | Best For |
| --- | --- | --- |
| Zero-Shot | No examples, just the straight-up task | Simple queries, quick tasks |
| Few-Shot | You give a few examples | Tone matching, creative work |
| Chain-of-Thought | Ask it to explain step-by-step | Logic, math, reasoning-based problems |
| Self-Consistency | Uses multiple reasoning paths to find consensus | Complex logic, improving answer reliability |
| Persona-Based | AI takes on a specific character or style | Role playing, customer service, storytelling |
| Role-Playing | Simulates characters or interactions | Simulations, training, entertainment |
| Contextual | Keeps building on previous questions | Deep conversation, learning, logical tasks |
| Multimodal | Uses text + images (or other media) | Visual tasks, creative work |

🧮 LLM Pricing = Input Tokens + Output Tokens

LLMs like GPT-4, Claude, or Gemini typically charge per token—not per request. Think of tokens as tiny pieces of words.

  • 1 token ≈ 4 characters (or about ¾ of a word).

  • You're charged for:

    • Input tokens (your prompt)

    • Output tokens (the model’s reply)
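That rule of thumb is enough for a back-of-the-envelope cost estimate. The per-million-token prices below are made-up placeholders; check your provider's pricing page for real numbers:

```javascript
// Rough token count using the ~4 characters per token rule of thumb.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Cost = input tokens * input rate + output tokens * output rate,
// with rates quoted per million tokens (as most providers do).
function estimateCostUSD(promptText, expectedOutputTokens, pricing) {
  const inputTokens = estimateTokens(promptText);
  return (
    (inputTokens * pricing.inputPerMillion +
      expectedOutputTokens * pricing.outputPerMillion) / 1e6
  );
}

// Placeholder rates, NOT real prices:
const pricing = { inputPerMillion: 0.15, outputPerMillion: 0.6 };
const cost = estimateCostUSD("Explain democracy in 200 words.", 300, pricing);
console.log(cost); // a tiny fraction of a cent for this prompt
```

Notice that a long reply usually dominates the bill, which is one more reason to constrain output length in your prompt.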

If you want to know more about what tokens are, or about AI terms like encoder, decoder, transformer, temperature, and tokenization, you can read my article.


🎓 Wrapping It Up

Prompting isn't just about talking to AI—it's about talking right. The more clearly and creatively you prompt, the more powerful the results you unlock. Whether you're solving math problems, writing essays, building apps, or just exploring cool tech—you now know how to make the most of your AI assistant.

🚀 Want to Go From Prompting to Building?

If you’re curious to go beyond just using AI and actually want to build with it, I’ve got something awesome for you.

Check out this hands-on course:
👉 Learning GenAI with Python: Concept to Deployment Projects

In this course, you’ll:

  • Learn how LLMs work under the hood 🧠

  • Build real-world GenAI apps with Python and popular tools ⚙️

  • Deploy your own AI-powered projects 🚀

  • And yes—master prompting even more deeply 💬

I have already built these two apps in just two sessions of the course:
Tokenizer
Persona Chat

It's beginner-friendly, project-focused, and designed to take you from “this is cool” to “I built that.”

✨ Use my link to support me (at no extra cost to you!) and start your GenAI journey today:
👉 Start Learning Here

Written by

Buddy Coder