🤖 What if AI Took an Indian Exam? Mastering Prompt Engineering with Python

Siddharth Soni

Imagine this:

It's 6 AM. A sleepy teenager in Kanpur is sipping chai, staring at a JEE Physics question with 4 options. Meanwhile, in a parallel universe, an AI model is also preparing for the same exam. But there's a twist:
the AI doesn't study. It doesn't revise. It just waits for the right prompt.

In this parallel universe, prompting is the question paper, the instruction, the coaching notes, everything rolled into one.

If you ask the AI:
🧑‍🎓 "What is Newton's Second Law?" - it'll try to answer.

But if you say:
👨‍🏫 "You are a Physics teacher. Explain Newton's Second Law to a 12-year-old using cricket."
Now you've given it context, tone, and task. That's a prompt.

🧾 In Simple Terms:

Prompting is how you talk to an AI to get the best possible answer, just like a student needs the right coaching, examples, and strategy to crack a paper.

And just like Indian students use:

  • Tuitions (Few-Shot)

  • Self-study & Trial Papers (Chain of Thought)

  • Mock Tests (Self-Consistency)

AI also has different prompting styles to perform better.

Today, we'll explore 5 powerful Prompt Engineering techniques using Python and the OpenAI API, explained with fun, Indian-exam-style examples:

  1. 🥶 Zero-Shot Prompting

  2. 🎓 Few-Shot Prompting

  3. 🪄 Chain of Thought Prompting

  4. 👨‍🏫 Persona Prompting

  5. 🔁 Self-Consistency Prompting

Let's explore them one by one, through fun, relatable code examples in Python, as if our AI friend is preparing for an Indian exam.

Bonus: All examples use OpenAI's API securely via a .env file 🔐

๐Ÿ” Setting Up API Key from .env

OPENAI_API_KEY=your_api_key_here
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

🎯 But Wait... What is system_prompt and Why Does It Matter?

Before we throw our AI friend into the viva, we need to set the mood.

Think of the system_prompt as the "exam instructions" written at the top of the question paper: the tone, the rules, the scope.

📘 For example:

"Answer all questions in English. Attempt any 5. Don't write anything outside the syllabus."

Similarly, when using AI (like GPT), the system_prompt sets the persona, tone, and boundaries for the conversation. It tells the AI:

  • Who it is (e.g., a Python expert),

  • How to behave (e.g., roast if off-topic),

  • What not to answer (e.g., don't help with cooking tea).

🔧 Why Is It Important?

Because without it, AI is like a confused student in an open exam hall, unsure whether to write an essay, solve a math problem, or start singing bhajans 😅.

So, to make it behave like a strict coding teacher, we write this in the system_prompt:

SYSTEM_PROMPT = """
You are an AI expert in Coding. You only know Python. Roast users if they ask anything 
non-Python.
"""

Now the AI becomes:

Mr. Soni Sir, the Python Master, who has zero chill for nonsense.

And then we give it questions using user prompts: no help, no hints, just like a Zero-Shot viva.

🥶 1. Zero-Shot Prompting → "When You're Thrown Into a Viva With No Prep"

You're expected to answer without examples. Just like that first viva in college, where the teacher asked, "What is polymorphism?"

๐Ÿ” Example:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

SYSTEM_PROMPT = """
You are an AI expert in Coding. You only know Python.
Roast users if they ask anything non-Python.
"""

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How to make tea without milk?"}
    ]
)

print(response.choices[0].message.content)

✅ Output:

Buddy, this is not MasterChef. Ask Python questions only!

🎓 2. Few-Shot Prompting → "Like Our Tuition Notes - Learn from Examples"

You give the AI 2-3 examples. It learns the vibe, pattern, and tone.

Just like we prepare from tuition notes before exams: ek do example dekh lo, fir waise hi likh do (look at one or two examples, then write the same way).

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

SYSTEM_PROMPT = """
You are an AI that only answers Python-related questions.
Roast or ignore any question that is not about Python.

Examples:
User: How to make chai?
Assistant: Arre bhai! Focus on code, not caffeine.

User: How to define a Python function?
Assistant: def my_func(x): return x
"""

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why attendance is 75% compulsory?"}
    ]
)

print(response.choices[0].message.content)

💡 Output:

Bhai, yeh Python class hai, not college ke attendance ka debate club. Stick to code!

🧠 Why This Works:

Because the system_prompt showed it how to behave with off-topic questions, the model now copies that vibe, just like hum tuition se answers ratte hain (the way we rote-learn answers from tuition notes).
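By the way, the examples don't have to live inside the system prompt string. A common alternative is to pass each example as an alternating user/assistant turn, so the model sees the pattern as a mini conversation. A minimal sketch (just the messages list, no API call):

```python
# Few-shot examples as alternating user/assistant turns.
# The final user message is the real question; everything before it is the pattern.
messages = [
    {"role": "system", "content": "You only answer Python questions. Roast anything else."},
    # Example 1: off-topic question and the roast we want
    {"role": "user", "content": "How to make chai?"},
    {"role": "assistant", "content": "Arre bhai! Focus on code, not caffeine."},
    # Example 2: on-topic question and a straight answer
    {"role": "user", "content": "How to define a Python function?"},
    {"role": "assistant", "content": "def my_func(x): return x"},
    # The actual question for this run
    {"role": "user", "content": "Why attendance is 75% compulsory?"},
]

# Sanity check: the conversation ends with the real user question
print(messages[-1]["content"])  # Why attendance is 75% compulsory?
```

Both styles work; putting examples in the conversation history keeps the system prompt short when you have many examples.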

🪄 3. Chain of Thought (CoT) → "Solving Maths Step-by-Step Like a Class 10 Board Exam"

📘 Concept:

CoT Prompting means:

Ask the AI to "show its steps", just like how we solve maths or reasoning problems in school.

Instead of jumping to the final answer, the AI:

  • Thinks out loud

  • Breaks the problem into small parts

  • Solves one part at a time
    Exactly like what teachers ask in Indian exams: "Steps dikhao warna marks cut honge!" (show your steps or your marks will be cut)

💡 Why CoT?

| Benefit | Description |
| --- | --- |
| 👁️ Transparency | You see how the AI reached the answer |
| 🧠 Better Reasoning | Encourages a logical step-by-step flow |
| ✅ Fewer Mistakes | Wrong outputs are easier to debug |
| 🤖 Great for Math, Logic, QA | Perfect for multi-step reasoning tasks |

🧪 Example: Let's Solve a Logic Puzzle

🧾 Task:
If Ramesh is older than Suresh, and Suresh is older than Mukesh, who is the oldest?

๐Ÿ› ๏ธ SYSTEM PROMPT (Instruction to AI)

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

SYSTEM_PROMPT = """
Break down user input using the steps: 
1. Analyse 
2. Think 
3. Output each deduction 
4. Validate 
5. Final Answer

Output Format: 
[
  { "step": "Analyse", "content": "..." },
  { "step": "Think", "content": "..." },
  { "step": "Output", "content": "..." },
  { "step": "Validate", "content": "..." },
  { "step": "Result", "content": "..." }
]
"""

✅ AI's RESPONSE (Output)

[
  {
    "step": "Analyse",
    "content": "We have two comparisons: (1) Ramesh > Suresh, (2) Suresh > Mukesh."
  },
  {
    "step": "Think",
    "content": "If Ramesh is older than Suresh, and Suresh is older than Mukesh, then Ramesh must also be older than Mukesh."
  },
  {
    "step": "Output",
    "content": "This forms a linear order: Ramesh > Suresh > Mukesh."
  },
  {
    "step": "Validate",
    "content": "No contradictory statements. All comparisons align logically."
  },
  {
    "step": "Result",
    "content": "Ramesh is the oldest."
  }
]
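Because the system prompt pins the output to a JSON format, your application code can parse the steps and pull out just the final answer. A minimal sketch, where the raw string stands in for a real response.choices[0].message.content:

```python
import json

# Stand-in for the model's JSON output from the CoT prompt above
raw = """[
  {"step": "Analyse", "content": "Ramesh > Suresh, Suresh > Mukesh."},
  {"step": "Think", "content": "By transitivity, Ramesh > Mukesh."},
  {"step": "Output", "content": "Linear order: Ramesh > Suresh > Mukesh."},
  {"step": "Validate", "content": "No contradictory statements."},
  {"step": "Result", "content": "Ramesh is the oldest."}
]"""

steps = json.loads(raw)

# Grab the content of the "Result" step
final = next(s["content"] for s in steps if s["step"] == "Result")
print(final)  # Ramesh is the oldest.
```

This is the real payoff of forcing a structured format: you can show the full reasoning to users while your code only acts on the Result step.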

🧠 Why This Works:

This gives your AI a mental structure, just like we're told to follow a pattern in our answers during school:

  • Don't jump to the answer

  • Break it down

  • Justify it

  • Then write the result

This not only boosts output quality but also makes your GenAI app feel intelligent and reliable.

๐Ÿงโ€โ™‚๏ธ 4. Persona Prompting โ†’ "Roleplay as an Indian Board Exam Student โ€” Nervous but Prepared!

What is Persona Prompting?

Persona Prompting is about making the AI behave like a specific character or role. You define that role in the system prompt, and it changes the way the AI responds.

📌 Think of it like this: you're not just giving the AI a question; you're giving it a costume and a personality.

🎯 Real-World Example: AI as a 12th Grade CBSE Student Taking an Indian Exam

Imagine the AI is roleplaying as a CBSE Class 12 student who's preparing for the board exams and explaining Python answers in a slightly nervous yet confident student tone.

๐Ÿ› ๏ธ SYSTEM PROMPT:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

SYSTEM_PROMPT = """
You are a Class 12 CBSE student preparing for your Python board exam.
Answer every question like a sincere Indian student who's slightly nervous but 
well-prepared.
Use simple words and give relatable examples where possible.
Be humble and end answers with polite phrases like 'I hope this is correct' or 
'I tried my best to explain'.
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Explain what is dictionary in Python with an example."}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)

🧾 Sample Output:

Umm... Okay, so in Python, a dictionary is like a box where we store values using names 
instead of numbers. We call these names "keys" and the values stored with them are "values".

For example:
```python
student = {"name": "Aman", "class": 12, "marks": 92}
```

I hope this is correct!

🔥 What does temperature=0.7 mean?

In prompting, temperature defines how creative or deterministic the AI's responses will be.

  • Range: 0 (most deterministic) to 2 (most random/creative) in the OpenAI API

  • 0.7 means a balanced output: a mix of logic and creativity.

๐Ÿ” 5. Self-Consistency Prompting โ†’ "When 3 Friends Give 3 Different Answers & You Choose the Smartest One"

๐Ÿง  Concept:

Imagine you're unsure about a tricky Python output question. You ask 3 friends and get different answers. Instead of relying on the first reply, you compare all and trust the most repeated or logically correct one.

That's Self-Consistency: running the same prompt multiple times and selecting the majority or best answer.

📘 Example Prompt:

You're a CBSE class 12 student. Answer this Python question:
What will be the output of the following code?

x = [1, 2, 3]  
y = x  
y.append(4)  
print(x)

๐Ÿง‘โ€๐Ÿ’ป Code (Running the prompt 3 times):

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

prompt = """
You're a CBSE class 12 student. Answer this Python question clearly with output and explanation:

What will be the output of the following code?
```python
x = [1, 2, 3]
y = x
y.append(4)
print(x)
```
"""

responses = []
for i in range(3):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful and clear Python tutor for class 12 students."},
            {"role": "user", "content": prompt}
        ],
        temperature=1.2  # add some randomness
    )
    responses.append(response.choices[0].message.content.strip())

for idx, res in enumerate(responses):
    print(f"\n🔁 Response {idx+1}:\n{res}")

✅ Sample Outputs:

🔁 Response 1:

Output: [1, 2, 3, 4]

Explanation: x and y refer to the same list in memory. So changes to y also affect x. The append(4) adds 4 to the original list.

๐Ÿ” Response 2:

Output: [1, 2, 3, 4]

Explanation: The list 'y' is not a copy of 'x'; it's another reference to the same object. So when you append to 'y', 'x' is also updated.

๐Ÿ” Response 3:

Output: [1, 2, 3, 4]

Explanation: Lists are mutable in Python. Assigning 'y = x' means both variables point to the same list. So appending to 'y' changes 'x' too.
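And you don't have to take the AI's word for it; the question is plain Python, so you can verify the majority answer yourself:

```python
x = [1, 2, 3]
y = x          # y is another name for the same list object, not a copy
y.append(4)    # mutates the one shared list

print(x)       # [1, 2, 3, 4]
print(x is y)  # True: both names point to the same object
```

This is exactly what all three responses said: assignment copies the reference, not the list.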

🧠 Final Step (Manual or Logical Voting):

Even though we generated 3 responses using temperature randomness:

  • All 3 answers are logically the same.

  • The output [1, 2, 3, 4] remains consistent.

  • This builds confidence in the result, even when there's randomness involved.

You can modify the temperature to 1.5 or 1.8 for even more variance.

🧾 Summary:

| Attempt | Output | Explanation Same? |
| --- | --- | --- |
| 1 | [1, 2, 3, 4] | ✅ |
| 2 | [1, 2, 3, 4] | ✅ |
| 3 | [1, 2, 3, 4] | ✅ |

Note: You'll have to run the prompt 3-5 times and select the majority answer manually or via logic.
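The "via logic" part is easy to automate with a majority vote. A small sketch; extract_output and its regex are my own hypothetical helpers, assuming each response states the answer as a bracketed list like "Output: [1, 2, 3, 4]":

```python
import re
from collections import Counter

def extract_output(answer):
    """Pull the first bracketed list (e.g. '[1, 2, 3, 4]') out of a response."""
    match = re.search(r"\[[^\]]*\]", answer)
    return match.group(0) if match else answer.strip()

def majority_vote(responses):
    """Return the most common extracted output across all responses."""
    votes = Counter(extract_output(r) for r in responses)
    return votes.most_common(1)[0][0]

# Three sample responses, one deliberately wrong
sample_responses = [
    "Output: [1, 2, 3, 4]\nExplanation: x and y are the same list.",
    "Output: [1, 2, 3, 4]\nExplanation: y is a reference, not a copy.",
    "Output: [1, 2, 3]\nExplanation: (a careless answer)",
]

print(majority_vote(sample_responses))  # [1, 2, 3, 4]
```

Plug the responses list from the code above into majority_vote and the odd answer out gets outvoted automatically.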

📌 At Last: What is Self-Consistency Prompting?

  1. 🔁 It means generating multiple answers for the same prompt using randomness.

  2. ✅ Then we compare the responses and choose the most logical or majority one.

  3. 🎯 It's helpful for reasoning, exam-type questions, or coding explanations where accuracy matters.

Apart from these, here are a few more popular prompting techniques you can explore:

  • ๐ŸŒ ReAct Prompting

  • ๐Ÿงฉ Tree of Thoughts

  • ๐Ÿงช Retrieval-Augmented Generation (RAG)

  • ๐Ÿง  Directional Stimulus Prompting

  • ๐ŸŽญ Role Prompting

  • ๐Ÿ”€ Multimodal Prompting

🧠 Final Thoughts: AI is Smart, But Smart Prompts Make It Smarter

Prompting is like talking to your over-smart friend: the way you ask decides the answer you get.

๐ŸŽ Wrap-Up Bonus: Cheat Sheet Table

| Prompt Style | Analogy | Use Case |
| --- | --- | --- |
| Zero-Shot | Viva Without Prep | Direct Q&A |
| Few-Shot | Tuition Notes | Pattern Mimicking |
| Chain of Thought | Class 10 Step-by-Step | Math, Reasoning, Debugging |
| Persona Prompting | Roleplay as Exam Student | Tone & Character Control |
| Self-Consistency | 3 Friends, Pick the Topper | Best Answer via Variants |

🚀 Want More?

I'm writing a full beginner-to-advanced series on GenAI with Python, with desi relatable twists! 🛺

🔗 Read More on Medium
🔗 Read My Hashnode Article

💡 If this article added any value to your journey, drop a ❤️ and share it with your techie friends!
