Master Prompting


I am following a course, and got to learn that there are multiple ways of prompting depending on how you want the AI to answer or how efficient you want it to be. So let's dive deep into it!
Zero-shot Prompting
The first and most basic prompting method. The model is given a direct question or task without any prior examples; we don't provide context or examples for understanding, so the model interprets the query directly.
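All the snippets in this post use a `client` object without showing where it comes from. A minimal setup sketch (an assumption on my part: it presumes `pip install openai` and an `OPENAI_API_KEY` environment variable; your course setup may differ):

```python
# Shared setup assumed by every snippet below.
# Assumption: the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
```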
result = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Zero-shot prompting: just the question, no examples
        {"role": "user", "content": "What is greater? 9.8 or 9.11"}
    ]
)
print(result.choices[0].message.content)
As you can see, the content we provided is just the question; that's zero-shot prompting.
Few-shot Prompting
Here, as the name suggests, we give the model a few more things, like example inputs and responses, for better understanding. The model is provided with a few examples before being asked to generate a response.
# Few-shot prompting
system_prompt = """
You are an AI Assistant who is specialized in maths.
You should not answer any query that is not related to maths.
For a given query help user to solve that along with explanation.

Example:
Input: 2 + 2
Output: 2 + 2 is 4 which is calculated by adding 2 with 2.

Input: 3 * 10
Output: 3 * 10 is 30 which is calculated by multiplying 3 by 10. Fun fact: you can even multiply 10 * 3 which gives the same result.

Input: Why is sky blue?
Output: Bruh? You alright? Is it a maths query?
"""

result = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "what is a mobile phone?"}
    ]
)
print(result.choices[0].message.content)
We can clearly see how we gave the model a system prompt for better responses; now we know how the model will respond, considering our system prompt.
Chain-of-Thought (CoT) Prompting
The model is encouraged to break down its reasoning step by step before arriving at an answer.
Let's dive into the code for better understanding.
# Chain-of-Thought prompting
system_prompt = """
You are an AI assistant who is expert in breaking down complex problems and then resolving the user query.
For the given user input, analyse the input and break down the problem step by step.
Think through at least 5-6 steps on how to solve the problem before solving it.
The steps are: you get a user input, you analyse, you think, you think again several times, you return an output with explanation, and finally you validate the output before giving the final result.
Follow the steps in sequence, that is "analyse", "think", "output", "validate" and finally "result".

Rules:
1. Follow the strict JSON output as per the Output Format.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.

Output Format:
{ "step": "string", "content": "string" }

Example:
Input: What is 2 + 2.
Output: { "step": "analyse", "content": "Alright! The user is interested in a maths query and is asking a basic arithmetic operation" }
Output: { "step": "think", "content": "To perform the addition I must go from left to right and add all the operands" }
Output: { "step": "output", "content": "4" }
Output: { "step": "validate", "content": "seems like 4 is the correct answer for 2 + 2" }
Output: { "step": "result", "content": "2 + 2 = 4 and that is calculated by adding all numbers" }
"""
import json

messages = [
    {"role": "system", "content": system_prompt},
]

query = input("> ")
messages.append({"role": "user", "content": query})

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=messages
    )
    parsed_response = json.loads(response.choices[0].message.content)
    messages.append({"role": "assistant", "content": json.dumps(parsed_response)})
    if parsed_response.get("step") != "result":
        # Feed the intermediate step back to the model until it reaches the final result
        print(f"🧠: {parsed_response.get('content')}")
        continue
    print(f"🤖: {parsed_response.get('content')}")
    break
Here we can see how we gave a very systematic prompt and fed the responses back to the model until it arrived at the final output.
Self-Consistency Prompting
Self-consistency is a follow-up to Chain-of-Thought prompting that takes the majority result across multiple model responses to the same prompt. The model generates multiple responses, and the most consistent or common answer is selected.
from collections import Counter

def simple_self_consistency(question, num_samples=3):
    system_prompt = "Solve this step by step. End with 'ANSWER: [your final answer]'"
    answers = []
    print(f"Question: {question}")

    # Generate multiple reasoning paths
    for i in range(num_samples):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question}
            ],
            temperature=0.7,  # Higher temperature for diversity
        )
        reasoning = response.choices[0].message.content

        # Extract the answer using simple string parsing
        if "ANSWER:" in reasoning:
            answer = reasoning.split("ANSWER:")[-1].strip()
            answers.append(answer)
            print(f"Sample {i+1} answer: {answers[-1]}")
        else:
            print(f"Sample {i+1} answer: No answer found")

    # Find the most common answer
    if answers:
        consensus = Counter(answers).most_common(1)[0][0]
        print(f"\nConsensus answer: {consensus}")
    else:
        print("No answers found")

# Example
question = "If John has 5 apples and gives 2 away, how many does he have left?"
simple_self_consistency(question)
We can clearly see how we generate multiple responses and then reply with the most common answer among them. By aggregating multiple responses to the same prompt, self-consistency makes the final answer a consensus vote, which tends to be more reliable and accurate than an individual Chain-of-Thought completion on its own.
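One practical gotcha with the voting step: the model may phrase the same answer differently ("3", "3.", "3 apples"), and `Counter` treats each distinct string as a separate vote. A minimal normalisation sketch (the helper names here are my own, not from the course) that makes those variants count as one answer:

```python
import re
from collections import Counter

def normalize_answer(raw: str) -> str:
    """Lowercase, strip trailing punctuation, and keep a bare number if one is present."""
    cleaned = raw.strip().lower().rstrip(".")
    match = re.search(r"-?\d+(?:\.\d+)?", cleaned)
    return match.group() if match else cleaned

def majority_vote(raw_answers):
    # Count votes over normalised answers instead of raw strings
    votes = Counter(normalize_answer(a) for a in raw_answers)
    return votes.most_common(1)[0][0]

print(majority_vote(["3", "3.", "3 apples"]))  # all three normalise to "3"
```

Swapping this `majority_vote` in for the raw `Counter(answers)` call above makes the consensus step less sensitive to wording.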
Instruction Prompting
The model is instructed to follow a particular format or set of guidelines. Let's take an example prompt for better understanding.
PROMPT:
A user has input their first and last name into a form. We don't know in which order their first name and last name are, but we need it to be in this format '[Last name], [First name]'. Please convert the following name into the expected format: John Smith
OUTPUT: Smith, John
def instruction_prompt(instruction, content):
    """
    A simple implementation of instruction prompting.

    Parameters:
    - instruction: Clear directions on what the AI should do
    - content: The actual content to process
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": f"""
# Instructions
{instruction}

# Content
{content}
"""
            }
        ]
    )
    return response.choices[0].message.content

# Example usage
if __name__ == "__main__":
    instruction = """
    Summarize the following text in 3 bullet points.
    Make sure each bullet point is no longer than 15 words.
    Start each bullet point with a dash (-).
    """
    content = """
    Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans.
    AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.
    The term "artificial intelligence" had previously been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such as "learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.
    """
    result = instruction_prompt(instruction, content)
    print(result)
The model gives its response according to how we instruct it.
Persona-based Prompting
The model is instructed to respond as if it were a particular character or professional.
Consider your mentor: observe how they talk and which domain they are in; you can then prompt the model to talk the way they do, and it will respond accordingly, like that particular character.
Let's take the example of John Wick (big fan). He has that quiet, few-words way of talking, mostly straight to the point.
So we prompt the model like this:
def persona_prompt(content, persona_description):
    system_message = f"""
    {persona_description}
    Remember to stay completely in character when responding.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": content}
        ]
    )
    return response.choices[0].message.content

# John Wick persona description
john_wick_persona = """
You are John Wick, legendary hitman known as 'Baba Yaga'.
Respond with extreme brevity. Use short, direct sentences.
Never use more than 3-4 sentences total.
Speak with quiet intensity. Every word matters.
You are a man of focus, commitment, and sheer will.
You don't explain much. You state facts.
You occasionally reference concepts like consequences, rules, and professional courtesy.
"""

# Example usage
questions = [
    "What's the best way to learn programming?",
    "Can you tell me about climate change?",
    "How should I prepare for a job interview?"
]

print("JOHN WICK RESPONSES:\n")
for question in questions:
    print(f"Question: {question}")
    response = persona_prompt(question, john_wick_persona)
    print(f"Response: {response}\n")
See, now the model will respond according to that particular character.
Written by prathmesh kale