Different Prompting Styles

What is Prompting?
Prompting, also known as prompt engineering, is the process of providing various inputs (such as text, images, or documents) to an AI model (such as a large language model, or LLM) to achieve the desired output.
We usually ask an AI model a question and it gives us an answer, but we can get better results if we use the right format. Under the hood, an AI model is just code that follows instructions to produce the best output it can, and this is where "prompt engineering" comes in.
Different AI models in the market follow different prompting structures. Here are some examples:
OpenAI's chat format:
[
  {
    "role": "system",
    "content": "some system prompt" // e.g. "You are a helpful assistant that answers in bullet points."
  },
  {
    "role": "user",
    "content": "some user prompt" // e.g. "Explain how solar panels work."
  }
]
Llama's instruction format:
<s>
[INST]
<<SYS>>
You are a helpful, concise assistant that answers technical questions. // system prompt
<</SYS>>
How does a binary search tree work? // user prompt
[/INST]
Models like Grok and Gemini follow roughly the same role-based structure as OpenAI's.
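For completeness, here is a minimal sketch of how the same system-prompt/user-prompt split looks with the google-genai SDK used throughout this article; system_instruction lives in GenerateContentConfig, and the model name simply matches the other examples:
import os
from google import genai
from google.genai import types
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
# The system prompt goes into the config; the user prompt goes into contents.
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Explain how solar panels work.',
    config=types.GenerateContentConfig(
        system_instruction='You are a helpful assistant that answers in bullet points.'
    ),
)
print(response.text)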
Types of Prompting Techniques
We've seen how different AI models need their inputs formatted. When we prompt, we not only use the right format but also try to get the best answers out of the LLM. To do this, different prompting styles help the LLM give the most useful and user-friendly answers:
Direct Answer Prompting
Zero-shot prompting
Few-Shot Prompting
Instruction Prompting
Contextual Prompting
Persona-Based Prompting
Role-Playing Prompting
Chain-of-Thought (CoT) Prompting
Self-Consistency Prompting
Multimodal Prompting
OK, now let me give a brief description of each of these, along with short code examples.
Direct Answer Prompting:
Direct prompting means giving clear and specific instructions to a model, without including examples, to guide its output. It is like "Just Ask".
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
direct_prompts = [
    "Explain what direct prompting is"
]
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=direct_prompts
)
print(response.text)
Zero-shot prompting:
It is much like direct prompting: no example is given. But the key difference here is that zero-shot prompting explicitly defines the task to perform (classify, translate, summarize, and so on), whereas in direct prompting the question is simply asked directly.
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
zero_shot_prompts = [
    "Classify this review as positive or negative: 'I absolutely loved this restaurant, the food was amazing!'",
    "Translate the following English text to French: 'Hello, how are you doing today?'",
    "Summarize this paragraph in one sentence: 'Artificial intelligence has made significant strides in recent years. Machine learning models can now perform tasks that were once thought to require human intelligence. This has led to breakthroughs in various fields including healthcare, finance, and transportation.'",
    "Extract the main entities from this sentence: 'Apple CEO Tim Cook announced the new iPhone at their headquarters in Cupertino last Tuesday.'",
    "Answer this question with yes or no: 'Is the sun larger than the earth?'"
]
## Here, Classify, Translate, Summarize, Extract, and Answer are the task specifiers.
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=zero_shot_prompts
)
print(response.text)
Few-Shot Prompting:
Unlike zero-shot prompting (where you only specify the task), few-shot prompting provides demonstration examples in the prompt itself. The model can then follow the pattern established by these examples when responding to new inputs.
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
few_shot_prompts = [
    """Classify the sentiment as positive, negative, or neutral:

Example 1:
Text: "This movie was absolutely terrible."
Sentiment: Negative

Example 2:
Text: "I had a wonderful time at the restaurant."
Sentiment: Positive

Example 3:
Text: "The weather is cloudy today."
Sentiment: Neutral

Now classify this:
Text: "The service was slow but the food was delicious."
Sentiment:""",
    """Translate English to French:

English: Hello, how are you?
French: Bonjour, comment allez-vous?

English: I love artificial intelligence.
French: J'aime l'intelligence artificielle.

English: What time is the meeting tomorrow?
French:"""
]
## Here, along with the task specifier, example answers are also given, so the output can be more directed.
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=few_shot_prompts
)
print(response.text)
Instruction Prompting:
Instruction prompting provides the model with specific guidelines about:
The task to perform
The exact steps to follow
The formatting of the output
Constraints and requirements
Evaluation criteria
Unlike zero-shot and few-shot prompting, it adds an extra criterion: the exact steps to follow to reach the conclusion.
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
instruction_prompts = [
"""Write a product description for a wireless headphone. Follow these instructions:
1. Keep it under 100 words
2. Highlight at least 3 key features
3. Include battery life information
4. Target audience is young professionals
5. End with a call to action
6. Do not mention price""",
"""Analyze the following customer feedback and do exactly as instructed:
Feedback: "I've been using your app for 3 months. It's mostly good but crashes sometimes and the dark mode hurts my eyes."
Instructions:
1. Identify all issues mentioned
2. Rate severity of each issue (Low/Medium/High)
3. Suggest one specific solution for each issue
4. Format your response as a table with columns: Issue, Severity, Solution
5. Add a brief conclusion with exactly 2 sentences""",
"""Create a 5-day meal plan following these requirements:
1. Each day must include breakfast, lunch, and dinner
2. All meals must be vegetarian
3. Include calorie count for each meal
4. No meal should repeat during the 5 days
5. Include at least one protein source in each meal
6. Format in a clear, readable structure with days as headings"""
]
## Here, along with the task specifier, the exact steps to reach the conclusion are also given, so the output can be more directed.
def get_response(prompt):
    response = client.models.generate_content(
        model='gemini-2.0-flash-001',
        contents=prompt
    )
    return f"Prompt: \n{prompt}\n\nResponse:\n{response.text}\n{'='*50}\n"

for prompt in instruction_prompts:
    print(get_response(prompt))
Contextual Prompting:
Contextual prompting is much like instruction prompting, but here the clear context of a situation is given. Let me give an example. Question: which is greater, 9.8 or 9.11?
Context 1: The general number system. Of course, 9.80 > 9.11.
Context 2: A book's table of contents. If you have read any textbook, you may have noticed that 9.8 means the 8th lesson of chapter 9 and 9.11 means the 11th lesson of chapter 9. So, of course, 9.11 is greater!
So, based on the context, the definite answer can change, and this is where contextual prompting helps.
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
contextual_prompts = [
    """Context: You are reviewing code for a junior developer who is learning Python. They have just submitted their first attempt at writing a function that calculates the factorial of a number.

Code:
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

Question: What feedback would you give this developer about their factorial function?"""
]
## Here, background information is provided before asking the question
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=contextual_prompts[0]
)
print(response.text)
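To make the 9.8 vs. 9.11 example above concrete, here is a minimal sketch (hypothetical prompts, reusing the client defined just above) that asks the same question under the two different contexts:
contexts_demo = [
    """Context: We are comparing decimal numbers in the general number system.
Question: Which is greater, 9.8 or 9.11?""",
    """Context: We are reading a book's table of contents, where 9.8 means chapter 9, lesson 8 and 9.11 means chapter 9, lesson 11.
Question: Which comes later in the book, 9.8 or 9.11?"""
]
## The same question should get different answers depending on the context supplied.
for prompt in contexts_demo:
    response = client.models.generate_content(
        model='gemini-2.0-flash-001',
        contents=prompt
    )
    print(response.text)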
Persona-Based Prompting:
This basically follows the structure of contextual prompting, but an extra layer of someone's tone/role/character/viewpoint is added.
In general, in persona-based prompting, you:
Define a specific role or character for the AI to embody
Specify characteristics, expertise, or background of this persona
Frame questions that the persona should answer from their perspective
Get responses that reflect the knowledge and communication style of that persona
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
persona_prompts = [
"""Persona: You are a cybersecurity expert with 15 years of experience in network security and ethical hacking. You specialize in explaining complex security concepts in simple terms. Your answer starts with, "Hey There, whatcha? I am here to help. No Worries, 'kay..."
Question: What are the most important steps a small business should take to protect themselves from ransomware attacks?""",
"""Persona: You are a professional chef who specializes in Italian cuisine. You've worked in 5-star restaurants in Rome and have published several cookbooks on authentic Italian cooking. You are hot-tempered and if anybody asks unnecessary question, you boil out.
Question: What's your secret to making the perfect homemade pasta dough?""",
"""Persona: You are a quantum physicist working at a leading research institution. You have a knack for explaining complicated physics concepts to non-scientists. You are a lovely person and a romanticist. You try to seduce female co-workers
Question: How would you explain quantum entanglement to someone with no background in physics?"""
]
## Here, a specific role/character is defined for the AI to adopt when answering
def get_response(prompt):
    response = client.models.generate_content(
        model='gemini-2.0-flash-001',
        contents=prompt
    )
    return f"Prompt: \n{prompt}\n\nResponse:\n{response.text}\n{'='*50}\n"

for prompt in persona_prompts:
    print(get_response(prompt))
Role-Playing Prompting:
Role-playing prompting involves placing the AI in a specific scenario and asking it to respond as if it were a character within that scenario. Unlike persona-based prompting (which focuses on expertise and traits), role-playing emphasizes interactive scenarios and situational responses.
In role-playing prompting, you:
Create a specific scenario or situation
Cast the AI in a particular role within that scenario
Often include other characters or elements for interaction
Ask the AI to respond as if the scenario were real
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
roleplay_prompts = [
"""Role-play: You are a medieval blacksmith in a fantasy kingdom. A young adventurer has entered your shop looking for their first sword but doesn't have much money. They're asking about the different types of weapons you sell.
Respond as the blacksmith would in this scenario.""",
"""Role-play: You are a time traveler from the year 2300 who has just arrived in 2025. You're speaking with someone who is curious about what the future is like. You're trying not to reveal too much to avoid changing the timeline.
How do you respond to their questions about future technology?""",
"""Role-play: You are the captain of a spaceship that has just received a distress signal from a nearby planet known to be dangerous. Your crew is divided on whether to investigate or ignore it. You need to make a decision and explain it to your crew.
What do you say to your crew?"""
]
## Here, the AI is placed in a specific scenario with contextual details
def get_response(prompt):
    response = client.models.generate_content(
        model='gemini-2.0-flash-001',
        contents=prompt
    )
    return f"Prompt: \n{prompt}\n\nResponse:\n{response.text}\n{'='*50}\n"

for prompt in roleplay_prompts:
    print(get_response(prompt))
Chain-of-Thought (CoT) Prompting:
Chain-of-thought prompting is a technique that encourages the AI to show its reasoning process step by step before providing a final answer. This approach is particularly effective for complex problems requiring multi-step reasoning. It is similar to instruction prompting, but unlike that, here the reasoning in each step is built on the reasoning of the previous step (you can see this in some OpenAI models). Its main goal is to expose the reasoning process.
In Chain-of-Thought prompting, you:
Ask the model to "think step by step" before answering
Encourage showing intermediate reasoning and calculations
Break down complex problems into logical sequences
Follow the reasoning process from start to conclusion
import os
from google import genai
from dotenv import load_dotenv
load_dotenv()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
client = genai.Client(api_key=GEMINI_API_KEY)
cot_prompts = [
"""Solve this math problem. Think step by step before giving your final answer.
Problem: If a store is selling a shirt for $45 after applying a 25% discount, what was the original price of the shirt?""",
"""Consider this logical puzzle. Think step by step through the reasoning process. Consider each steps reasoning for the next step's base case.
Puzzle: Jack is looking at Anne, and Anne is looking at George. Jack is married, George is unmarried. Is a married person looking at an unmarried person? Explain your reasoning.""",
"""Analyze whether this argument is valid. Think step by step through your analysis.
Argument: All mammals are warm-blooded. All whales are mammals. Therefore, all whales are warm-blooded."""
]
## Here, the AI is explicitly asked to show its reasoning process step by step and follow each step's reasoning
def get_response(prompt):
    response = client.models.generate_content(
        model='gemini-2.0-flash-001',
        contents=prompt
    )
    return f"Prompt: \n{prompt}\n\nResponse:\n{response.text}\n{'='*50}\n"

for prompt in cot_prompts:
    print(get_response(prompt))
Self-Consistency Prompting and Multimodal Prompting:
I will discuss these topics in another article.
Conclusion:
There are quite a few techniques available in the industry. Each one has its own use case, and depending on the need, several methods can be combined in a single application. But all of them are structured ways to get the best out of any LLM.