Prompting Techniques: Your Toolkit for Mastering AI Communication


Quick Recap: In Part 1, we looked at how LLMs create responses using settings like temperature, Top-K, and Top-P. Now, we're going to explore the details of prompting techniques that make your prompts work effectively. Ready to improve your skills?
Part 2: Fundamentals of prompting techniques - what separates good prompts from great ones.
"Well, I'm good with English. So does that mean I'm automatically good at prompt engineering?"
Let's find out, shall we?
Here's the thing - being fluent in English is like having a good voice. But knowing prompting techniques? That's like training to sing professionally. Both use the same language, but the results are very different.
To get more clarity, let's consider a situation.
You could ask someone, "Can you help me?"
or
You could say, "I need guidance on an English project on topic X. I have Y pieces of information on this topic, and I need you to guide me on Z."
See the difference? Same language, completely different outcomes.
Therefore, it's not just about your English skills, but also about how you frame your question to get the best possible result.
Now let's take a deep dive into prompting techniques.
The Foundation: Building Blocks of a Good Prompt
Before we jump into specific techniques, let's break down what makes a prompt effective. Every great prompt has these core components:
Context → What's the situation or story behind the question?
Task → What do you want the AI to do?
Format → How should the output look?
Constraints → What are the rules/limitations?
Examples → What does good look like?
Think of it like giving directions to a friend. You wouldn't just say "Go there."
You'd say "Take the main road, turn left at the coffee shop, look for the blue building, and park in the back."
Technique #1: Zero-Shot Prompting
The name “Zero-Shot” stands for ’no examples’.
This is the simplest type of prompt: it provides only a description of a task and some text for the LLM to get started with. You're giving the AI a task without any examples.
The model temperature should be set to a low value, since no creativity is needed.
When to use: Simple, straightforward tasks where the instructions are clear.
from openai import OpenAI

# Assumes your OpenAI API key is set in the OPENAI_API_KEY environment variable
client = OpenAI()

def classify_review(review_text):
    # System prompt to define the task clearly
    system_prompt = """
You are a movie review classifier. Your task is to classify movie reviews into exactly one of these three categories:
- POSITIVE: Reviews that express overall satisfaction, praise, or recommendation
- NEUTRAL: Reviews that are balanced, mixed, or indifferent
- NEGATIVE: Reviews that express overall dissatisfaction, criticism, or disappointment
Respond with only the classification label: POSITIVE, NEUTRAL, or NEGATIVE.
"""
    # User prompt with the specific review
    user_prompt = f"""
Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "{review_text}"
Sentiment:
"""
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            temperature=0.1,  # low temperature: classification needs no creativity
            max_tokens=5,     # the label is only a few tokens long
            top_p=1.0,
            frequency_penalty=0,
            presence_penalty=0,
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        print(f"Error occurred: {e}")
        return "ERROR"
Expected Output: POSITIVE
The AI knows exactly what you want, how long it should be, what tone to use, and what to include.
Even in zero-shot, be as specific as possible - clear, direct, and detailed.
When zero-shot doesn’t work, you can provide demonstrations or examples in the prompt, which leads to “one-shot” and “few-shot” prompting.
Technique #2: Few-Shot Prompting
Give the AI examples of what you want, then ask for more of the same. Examples are especially useful when you want to steer the model to a certain output structure or pattern.
A one-shot prompt provides a single example, hence the name. The idea is that the model has an example it can imitate to best complete the task.
A few-shot prompt provides multiple examples to the model, showing it a pattern to follow. The idea is similar to one-shot, but multiple examples of the desired pattern increase the chance that the model follows it.
When to use: When you have a specific format, style, or pattern you want to replicate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def parse_pizza_order(order_text):
    # System prompt to define the task clearly
    system_prompt = """
You are a pizza order parser. Your task is to parse customer pizza orders into valid JSON format with this exact structure:
- "size": can be "small", "medium", "large", or "extra large"
- "type": can be "normal" (single topping set) or "half-half" (two different topping sets)
- "ingredients": array of arrays containing ingredients for each half
For normal pizzas: ingredients should be a single array inside the main array
For half-half pizzas: ingredients should contain two separate arrays for each half
Respond with only valid JSON in the specified format.
"""
    # User prompt with examples and the specific order
    # (doubled braces {{ }} are needed because this is an f-string)
    user_prompt = f"""
Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
{{ "size": "small", "type": "normal", "ingredients": [["cheese", "tomato sauce", "pepperoni"]] }}

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella
JSON Response:
{{ "size": "large", "type": "normal", "ingredients": [["tomato sauce", "basil", "mozzarella"]] }}

Now parse this order:
{order_text}
JSON Response:
"""
    try:
        # Make the API call with settings matching the original configuration
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            temperature=0.1,
            max_tokens=250,
            top_p=1.0,
        )
        # Extract and return the parsed JSON string
        return response.choices[0].message.content.strip()
    except Exception as e:
        print(f"Error occurred: {e}")
        return "ERROR"
Input:
"Now, I would like a large pizza, with the first half cheese and mozzarella.
And the other tomato sauce, ham and pineapple."
Expected Output:
{
"size": "large",
"type": "half-half",
"ingredients": [["cheese", "mozzarella"], ["tomato sauce", "ham", "pineapple"]]
}
When you choose examples for your prompt, use examples that are relevant to the task you want to perform. The examples should be diverse, of high quality, and well written. One small mistake can confuse the model and can result in undesired output.
Technique #3: Chain-of-Thought (CoT) Prompting
Chain of Thought prompting is a technique for improving the reasoning capabilities of LLMs by generating intermediate reasoning steps. This helps the LLM generate more accurate answers.
To see why this matters, let's first try a reasoning task without any intermediate steps.
user_prompt= """
When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old.
How old is my partner?
"""
Output : 63 years old
Yikes. That’s obviously the wrong answer. As a matter of fact, LLMs often struggle with mathematical tasks and can provide incorrect answers – even for a task as simple as multiplying two numbers. This is because they are trained on large volumes of text and math may require a different approach.
This is where chain of thought comes in: you instruct the model to reason before it answers. For more complex tasks, you can also combine it with few-shot prompting, since such tasks remain a challenge for zero-shot chain of thought.
Now let’s see if intermediate reasoning steps will improve the output.
Example of ‘zero-shot’ chain of thought:
Prompt :
""" When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old.
How old is my partner? Let's think step by step. """
Output :
"""
1. Let’s assume my current age is “x” years.
2. When I was 3 years old, my age was 3 years.
3. My partner’s age at that time was 3 times my age, which means my partner’s
age was 3 * 3 = 9 years.
4. Now, I am 20 years old, which means my age has increased by 20 - 3 = 17
years since I was 3 years old.
5. Since my partner’s age was 9 years when I was 3 years old, and my age has
increased by 17 years, my partner’s age must have also increased by 17 years.
6. Therefore, my partner’s current age is 9 + 17 = 26 years.
So, my partner is 26 years old.
"""
Now the final answer is correct. That’s because we clearly instructed the LLM to explain each step instead of just returning an answer.
Now let’s help the model think a little more by combining it with the few-shot approach.
Prompt :
"""
Q: When my brother was 2 years old, I was double his age. Now
I am 40 years old. How old is my brother? Let's think step
by step.
A: When my brother was 2 years, I was 2 * 2 = 4 years old.
That's an age difference of 2 years and I am older. Now I am 40
years old, so my brother is 40 - 2 = 38 years old. The answer
is 38.
Q: When I was 3 years old, my partner was 3 times my age. Now,
I am 20 years old. How old is my partner?
Let's think step
by step.
A:
"""
Output :
""" When I was 3 years old, my partner was 3 * 3 = 9 years old. That’s an age
difference of 6 years and my partner is older. Now I am 20 years old, so my
partner is 20 + 6 = 26 years old. The answer is 26.
"""
Chain of thought can be useful for various use cases. Think of code generation: breaking the request down into a few steps and mapping those to specific lines of code. Or creating synthetic data when you have some kind of seed, like “The product is called XYZ, write a description,” guiding the model through the assumptions you would make based on the given product title. Generally, any task that can be solved by ‘talking it through’ is a good candidate for chain of thought. If you can explain the steps needed to solve the problem, try chain of thought.
With this process, you get both the answer AND the reasoning, making it easier to verify accuracy and learn from the process.
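If you want to wire this into code, here's a minimal sketch reusing the OpenAI client from the earlier examples (the function name ask_with_cot and the settings are illustrative; low temperature suits reasoning tasks, where you want one self-consistent chain rather than creative variety):

def ask_with_cot(question):
    # Appending the trigger phrase turns this into zero-shot chain of thought
    prompt = f"{question} Let's think step by step."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic: we want a single, careful reasoning chain
        max_tokens=500,  # leave room for the intermediate steps, not just the answer
    )
    return response.choices[0].message.content.strip()

print(ask_with_cot(
    "When I was 3 years old, my partner was 3 times my age. "
    "Now, I am 20 years old. How old is my partner?"
))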
Technique #4: Role-Based Prompting
Role prompting is a technique in prompt engineering that involves assigning a specific role to the gen AI model. This can help the model generate more relevant and informative output, as it can craft its responses to fit the specific role it has been assigned.
Prompt : """ I want you to act as a travel guide. I will write to you about
my location and you will suggest 3 places to visit near me in
a humorous style.
My suggestion: "I am in Manhattan."
Travel Suggestions:
"""
Output : """
1. Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire
State Building and bask in the glory of Manhattan's skyline. Prepare to feel like
King Kong atop the Big Apple, minus the giant ape-sized banana.
2. Get Artsy-Fartsy at MoMA: Unleash your inner art aficionado at the Museum
of Modern Art (MoMA). Gaze upon masterpieces that will boggle your
mind and make you question whether your stick-figure drawings have any
artistic merit.
3. Shop ‘Til You Drop on Fifth Avenue: Indulge in some retail therapy on the
iconic Fifth Avenue. Brace yourself for sticker shock as you window-shop (or
actually shop) at designer boutiques that will make your wallet cry. But hey,
"""
Here are some styles you can choose from that I find effective: Confrontational, Descriptive, Direct, Formal, Humorous, Influential, Informal, Inspirational, Persuasive.
In this approach, the AI draws on relevant knowledge patterns and adopts the thinking style of the role it has been given.
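In code, the natural home for the role is the system message. A minimal sketch, reusing the client from the earlier snippets (the function name and temperature are illustrative):

def travel_guide(location, style="humorous"):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # The system message carries the role and the desired writing style
            {"role": "system", "content": (
                f"You are a travel guide. Suggest 3 places to visit near "
                f"the user's location, written in a {style} style."
            )},
            {"role": "user", "content": f"I am in {location}."},
        ],
        temperature=0.8,  # a little creativity suits humorous writing
    )
    return response.choices[0].message.content

print(travel_guide("Manhattan"))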
Technique #5: Step-Back Prompting
Step-back prompting is a technique where, instead of asking the question directly, you first ask a broader or more general question to activate relevant background context. Then you use that output as context to craft the final prompt. This extra “step back” gives the model more context and insight, ultimately leading to higher-quality, more accurate responses.
This approach helps the model think more critically and, more importantly, helps it avoid bias and toxicity. It’s particularly useful for improving performance on complex or abstract prompts, where diving in directly might produce generic or shallow output.
Making the model contextually align with your query is the main goal here.
Prompt : """ Based on popular first-person shooter action games, what are
5 fictional key settings that contribute to a challenging and
engaging level storyline in a first-person shooter video game?
"""
Output : """
1. **Abandoned Military Base**: A post-apocalyptic zone crawling with rogue robots and mutated enemies.
2. **Cyberpunk City**: A neon-lit urban maze filled with cybernetic foes and hacking puzzles.
3. **Alien Spaceship**: A stranded vessel featuring zero gravity combat and alien terror.
4. **Zombie-Infested Town**: Hordes of undead in a decaying town with tight quarters and puzzle challenges.
5. **Underwater Research Facility**: A sunken lab teeming with aquatic monsters and stealth missions.
"""
Now, instead of jumping into the storyline directly, we include this rich context from the broader step-back question:
Prompt : """Context: 5 engaging themes for a first-person shooter video game:
1. Abandoned Military Base
2. Cyberpunk City
3. Alien Spaceship
4. Zombie-Infested Town
5. Underwater Research Facility
Take one of the themes and write a one paragraph storyline for a new level of a first-person shooter video game that is challenging and engaging.
"""
Output : """
In the abyss of the Atlantic, an underwater research facility crumbles beneath the weight of forgotten ambition. The player, a battle-hardened marine armed with deep-sea weaponry and razor-sharp instincts, must infiltrate the station to recover lost data—before mutated sea creatures claim it first. With claustrophobic tunnels, power outages, and lurking horrors, survival demands more than just firepower—it demands wit, timing, and a heart unshaken by the deep’s darkest secrets.
"""
By asking a general question first, the model got aligned with relevant concepts and scenarios, making the final result more focused and better aligned with expectations. Step-back prompting leverages the model’s internal knowledge more efficiently, and it even mitigates bias or irrelevant outputs by anchoring the response in core principles.
It’s like letting your friend hear the full breakup story before they comment or suggest some stupid idea.
To put it simply, you are building the contextual frame (a mental frame) before the model responds to your query.
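Here's what the two-step flow can look like in code, again reusing the client from earlier (a sketch; the function name and temperatures are illustrative):

def step_back_answer(broad_question, final_task):
    # Step 1: ask the broader question to surface relevant background
    context = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": broad_question}],
        temperature=1.0,  # a varied, idea-rich context is fine here
    ).choices[0].message.content
    # Step 2: feed that output back in as context for the actual task
    final_prompt = f"Context: {context}\n\n{final_task}"
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": final_prompt}],
        temperature=0.7,
    ).choices[0].message.content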
Technique #6: Re-Act Prompting (Reason + Act)
Re-Act (short for Reason and Act) is a powerful prompting framework that allows a language model to alternate between thinking and doing—just like humans do when they face unfamiliar tasks.
This technique breaks down the process of solving a task into a few steps:
Steps | Description of the Process
Thought | What the model is thinking or trying to figure out
Action | The action the model takes: Search[query], Database[query], or Calculate[problem statement]
Observation | What the model got from performing that action
This loop of Thought → Action → Observation → Thought allows the model to gather fresh information from the outside world and update its internal decision-making process dynamically. It’s particularly useful in agent-based tasks, data retrieval, or solving problems with multiple steps and dependencies.
Prompt : """
You are a helpful AI assistant with the ability to reason logically and use tools like
web search, calculator, or database to find accurate and up-to-date answers.
Please answer the following query step-by-step while following the format given below.
Use the following structure repeatedly until you find the final answer:
Thought: [What are you thinking or trying to figure out?]
Action: [What action are you taking: Search[query], Lookup[keyword], or Calculate[expression]]
Observation: [What did you get from performing the action?]
Once you are confident in the answer, respond with:
Final Answer: [The answer to the user's original question]
---
Question: "Who won the FIFA World Cup in 2022 and what country will host the next tournament?"
Begin:
"""
Output : """
Question: "Who won the FIFA World Cup in 2022 and what country will host the next tournament?"
Thought: I need to find out which country won the FIFA World Cup in 2022, and also determine
which country will host the next (2026) World Cup. I will search for both pieces of
information.
Action: Search["2022 FIFA World Cup winner"]
Observation: Argentina won the 2022 FIFA World Cup, defeating France in the final match.
Thought: Next, I need to find out which country is hosting the 2026 FIFA World Cup.
Action: Search["2026 FIFA World Cup host country"]
Observation: The 2026 FIFA World Cup will be jointly hosted by the United States, Canada, and
Mexico.
Final Answer: Argentina won the 2022 FIFA World Cup. The 2026 FIFA World Cup will be jointly
hosted by the United States, Canada, and Mexico.
"""
This process of prompting gives the model the ability to self-reflect, fetch external knowledge, and update its path dynamically, making it an ideal approach for real-world applications like virtual agents, data scraping, or tool-based reasoning.
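To make the loop concrete, here's a heavily simplified sketch. Real ReAct implementations usually rely on an agent framework (e.g. LangChain) and a live search API; here the search tool is a hard-coded stub, only the Search action is parsed, and the client is the one from the earlier snippets:

import re

REACT_PROMPT = (
    "Answer the question step by step using this format:\n"
    "Thought: [your reasoning]\n"
    "Action: Search[query]\n"
    "Observation: [result of the action]\n"
    "Repeat until confident, then write:\n"
    "Final Answer: [the answer]\n\n"
)

def search(query):
    # Stub tool: a real agent would call a search API here
    facts = {
        "2022 FIFA World Cup winner": "Argentina won the 2022 FIFA World Cup.",
        "2026 FIFA World Cup host country": "The USA, Canada, and Mexico will co-host in 2026.",
    }
    return facts.get(query, "No results found.")

def react_loop(question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask the model for its next Thought/Action given the transcript so far
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": REACT_PROMPT + transcript}],
            temperature=0,
            stop=["Observation:"],  # stop here so WE supply the observation
        ).choices[0].message.content
        transcript += reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Execute the requested action and append the real observation
        match = re.search(r'Action: Search\["?(.+?)"?\]', reply)
        if match:
            transcript += f"\nObservation: {search(match.group(1))}\n"
    return "No answer within the step limit."

print(react_loop("Who won the FIFA World Cup in 2022 and what country will host the next tournament?"))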
And trust me, that's not all. There are lots of other prompting methods as well; I'll mention some of them below:
Tree of Thoughts (ToT)
Contextual Prompting
Automatic Prompt Engineering
Self-consistency, etc.
In case you want more details on these techniques, I have mentioned my sources at the end of this article.
Best Practices
The most important best practice is to provide (one-shot / few-shot) examples within a prompt. This is highly effective because it acts as a powerful teaching tool.
Prompts should be concise, clear, and easy to understand for both you and the model. As a rule of thumb, if it's confusing for you, it will likely be confusing for the model as well. Avoid overly complex language and don't provide unnecessary information.
Be specific about the desired output. A concise instruction might not guide the LLM enough or could be too generic.
Use Instructions over Constraints.
An instruction explicitly specifies the desired format, style, or content of the response; it guides the model on what it should do or produce.
A constraint is a set of limitations or boundaries on the response; it specifies what the model should not do or should avoid.
Just like humans, LLMs respond better to positive instructions than to constraints (lists of what not to do). A pile of constraints leaves the model guessing about what is allowed, and that guesswork can result in hallucination.
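For example, instead of piling up constraints (an illustrative pair):
Prompt : """ Write a product description. Do not use jargon. Do not make it long. Do not mention competitors. """
prefer a positive instruction:
Prompt : """ Write a one-paragraph product description in plain, everyday language, focusing only on this product's features. """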
Try using verbs that describe the action. Here’s a set of examples: Act, Analyze, Categorize, Classify, Contrast, Compare, Create, Describe, Define, Evaluate, Extract, Find, Generate, Identify, List, Measure, Organize, Parse, Pick, Predict, Provide, Rank, Recommend, Return, Retrieve, Rewrite, Select, Show, Sort, Summarize, Translate, Write.
Conclusion
Prompt engineering is an evolving field, and it will take some time for us to get the hang of it.
Therefore, keep experimenting with your prompts. Another important suggestion I received: document your ‘prompt history‘, meaning your prompts and the responses you got from the model.
Keeping track of your prompts helps you refine them and steer the model's responses toward the output you want.
Here is one demo template:
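A minimal sketch of what such a template might contain (the fields below are suggestions; adapt them to your own workflow):
Name: [name and version of your prompt]
Goal: [one-sentence goal of this attempt]
Model: [model name and version]
Temperature / Top-K / Top-P: [the sampling settings you used]
Token Limit: [max tokens]
Prompt: [the full prompt text]
Output: [the output(s) you got]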
That's all for this article, and this marks the end of my Prompt Engineering Series. In case you missed it, there is another article on this topic - Part One.
This is Part Two of the series, where I have tried to put together all my learnings on prompt engineering.
Here are some sources of my knowledge:
