10 Prompting Techniques to Make Your LLM Do What You Want It to Do


Prompting means giving instructions to your LLM so that it responds in the manner you want. For different use cases we have to choose a prompting style carefully, and then the LLM will do our work efficiently. The way you ask matters just as much as what you ask. Whether you're generating content, solving problems, or building AI products, knowing different prompting styles can drastically improve your results.
1. Zero-Shot Prompting
In this prompting style you ask the model to perform a task without giving any prior examples.
When to use: For simpler tasks where the model can understand the instruction easily and clearly.
🧠 Prompt:
User: What is the capital of India?
🤖 Response:
Assistant: Delhi
Example code:
Here the user just provides the input query, without any context or examples, and the model generates the output.
import os
from openai import OpenAI
from dotenv import load_dotenv

# Load the API key from a .env file
load_dotenv(".env")
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

input_query = input("> ")

# No examples, no system prompt -- just the raw user query
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": input_query}],
)
print(response.choices[0].message.content)
2. 🧩 Few-Shot Prompting
Definition: You provide a few user-assistant pairs in the message history so the model learns the desired pattern through example interactions with the user.
import os
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv(".env")
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {'role': 'system', 'content': 'You are a helpful math tutor.'},
        {'role': 'user', 'content': 'What is 2 + 2?'},
        {'role': 'assistant', 'content': '2 + 2 is 4. We get it by adding 2 with 2.'},
        {'role': 'user', 'content': 'What is 3 * 3?'},
        {'role': 'assistant', 'content': '3 * 3 is 9. We get it by multiplying 3 with 3 or adding 3 three times.'},
        # The model now responds in a similar style
        {'role': 'user', 'content': 'What is 4 - 1?'},
    ],
)
print(response.choices[0].message.content)
Important: Few-shot means real turn-by-turn examples, not just examples embedded in a single system message (that would be instruction prompting).
3. 🔗 Chain-of-Thought (CoT) Prompting
Definition: You explicitly ask the model to show its step-by-step reasoning before giving the final answer.
Chain-of-Thought (CoT) Prompting is all about:
Encouraging the model to reason step by step before answering.
Making its thinking process explicit.
Avoiding rushing to conclusions.
import os
from openai import OpenAI
from dotenv import load_dotenv
import json
load_dotenv('.env')
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
system_prompt = """
You are an AI assistant and good at breaking down complex problems and then resolve the user query.
For the given input analyse the user input and breakdown the problem step by step
At least think 4 to 5 steps on how to solve the problem before solving it down.
The steps are You get a user input , you analyse, you think , you again think for several times and then return an output with explanation and then finally you validate the output as well before giving the final output.
Follow these steps in sequence that is 'analyse', 'think' , 'output', 'validate' , and finally 'result'
Rules:
1. Follow the strict JSON output as per OUTPUT schems.
2. Always perform 1 step at a time and wait for next input.
3. Carefully analyse the user query
{{ step : string , content: string}}
Example:
Input: what is 2+2.
output: {{ "step":"analyse", "content":"Alright , the user has asked a simple maths query, want to add two numbers that is 2 and 2. }}
output: {{ "step":"think". "content":"To perform the addition i must go from left to right and add all the operands."}}
output: {{ "step":"output", "content":"4"}}
output: {{ "step": "validate" , "content":"seems like 4 is correct . as 2 is added to 2" }}
output: {{ "step":"result", "content":"2 + 2 = 4 and that is calculated by adding all numbers in expression}}
"""
messages = [{"role": "system", "content": system_prompt}]
user_query = input("> ")
messages.append({"role": "user", "content": user_query})
while True:
result = client.chat.completions.create(
model='gpt-4o',
response_format={"type": "json_object"}, // giving it instruction to give response as json
messages=messages,
)
parsed_response = json.loads(result.choices[0].message.content)
messages.append(
{"role": "assistant", "content": result.choices[0].message.content})
if parsed_response.get("step") != "output":
print(f"🧠:{parsed_response.get('content')}")
continue
print(f"🤖:{parsed_response.get('content')}")
break
4. Self-Consistency Prompting
The model generates multiple responses, and you select the most common and consistent answer.
🧪 How It Works:
Run the same Chain-of-Thought prompt multiple times to generate diverse reasoning paths:
outputs = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[...],  # Your CoT messages
    )
    outputs.append(response.choices[0].message.content)
Concatenate all responses and ask the model to analyze and select the most logical one:
voting_prompt = f""" The following are different solutions generated for the same problem: {outputs} Based on logic, correctness, and clarity, which one seems the most accurate and consistent? Please explain your choice. """ final_output = client.chat.completions.create( model="gpt-4o", messages=[ {"role": "user", "content": voting_prompt} ] )
When to Use:
When answers vary in logic or explanation but look similar
For high-stakes decisions where reasoning quality matters
To leverage the LLM's own reasoning capacity for validation
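Instead of asking the model itself to pick the winner, a simple programmatic alternative is majority voting over the sampled final answers. A minimal sketch (the `majority_vote` helper is a hypothetical name, not part of any library):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer among the sampled completions."""
    counts = Counter(a.strip().lower() for a in answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Suppose five CoT runs ended with these final answers:
samples = ["51", "51", "48", "51", "48"]
print(majority_vote(samples))  # -> 51
```

This works well when answers are short and comparable (numbers, labels); for free-form explanations, the LLM-as-judge voting prompt above is the better fit.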
5. Instruction Prompting
What It Is:
You give explicit, clear instructions to guide the model’s behavior.
Example:
prompt = "Summarize this article in 3 bullet points."
Or inside system message:
{"role": "system", "content": "You are a professional copywriter. Always respond in short, punchy lines."}
When to Use:
Formatting or stylistic control
Giving the model a "job role"
6. Direct Answer Prompting
What It Is:
Tell the model to give only the answer, no explanation.
Example:
{"role": "system", "content": "Only return the final answer. No reasoning."}
{"role": "user", "content": "What is 17 * 3?"}
Response: "51"
When to Use:
APIs
Short-response UIs (like chat widgets)
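Even with a "no reasoning" instruction, models occasionally wrap the answer in extra words, so a small post-processing guard is useful in API settings. A minimal sketch (the `extract_number` helper is hypothetical):

```python
import re

def extract_number(text):
    """Pull the first integer out of a model reply, if any."""
    match = re.search(r"-?\d+", text)
    return int(match.group()) if match else None

print(extract_number("51"))                 # -> 51
print(extract_number("The answer is 51."))  # -> 51
```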
7. Persona-Based Prompting
What It Is:
You tell the model to behave like a specific character or expert.
Example:
{"role": "system", "content": "You are Sherlock Holmes. Answer like a detective."}
Response: "Elementary, my dear Watson! The clue lies in the footprints."
When to Use:
Character-driven applications
Brand voice enforcement
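For brand-voice or character-driven apps, a convenient pattern is to keep a small registry of persona system prompts and select one per request. A minimal sketch (the persona names and the `build_persona_messages` helper are hypothetical):

```python
PERSONAS = {
    "detective": "You are Sherlock Holmes. Answer like a detective.",
    "copywriter": "You are a professional copywriter. Always respond in short, punchy lines.",
}

def build_persona_messages(persona, user_input):
    """Build a chat messages list with the chosen persona as the system prompt."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_input},
    ]

messages = build_persona_messages("detective", "Who stole the cookies?")
print(messages[0]["role"])  # -> system
```

The resulting `messages` list is passed to `client.chat.completions.create(...)` exactly as in the earlier examples.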
8. Role-Playing Prompting
What It Is:
Simulate a conversation or scenario where the assistant acts as a role in dialogue.
Example:
{"role": "system", "content": "You are a doctor speaking to a patient. Keep it kind and professional."}
Response: "How are you feeling today? Let’s go over your symptoms together."
When to Use:
Simulations for training/education
Interactive storytelling
Using this, we can also build an AI interviewer.
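The AI-interviewer idea boils down to maintaining an alternating message history in which the assistant keeps asking questions. A minimal sketch of the conversation bookkeeping (the helper names are hypothetical; in a real app each question would come from a chat completion call):

```python
def start_interview(topic):
    """Seed the conversation with an interviewer system prompt."""
    return [{
        "role": "system",
        "content": f"You are a technical interviewer for a {topic} role. "
                   "Ask one question at a time and wait for the answer.",
    }]

def record_turn(messages, question, answer):
    """Append one interviewer question and one candidate answer."""
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})
    return messages

history = start_interview("Python backend")
record_turn(history, "What is a generator?", "A lazily evaluated iterator.")
print(len(history))  # -> 3
```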
9. Contextual Prompting
What It Is:
Feed the model relevant past messages or documents to help it answer more accurately.
Example:
messages = [
{'role': 'user', 'content': 'Here’s a document about our company values...'},
{'role': 'user', 'content': 'Now write a mission statement based on that.'}
]
When to Use:
Long-form writing
Task chains that require memory
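In practice the reference material is usually injected into the context before the actual task, often with a system instruction to stay grounded in it. A minimal sketch (the helper name and document text are illustrative):

```python
def build_contextual_messages(document, task):
    """Put the reference document in context, then ask the task."""
    return [
        {"role": "system", "content": "Answer using only the provided document."},
        {"role": "user", "content": f"Document:\n{document}\n\nTask: {task}"},
    ]

doc = "Our company values: honesty, curiosity, craftsmanship."
messages = build_contextual_messages(doc, "Write a one-line mission statement.")
print(messages[1]["content"].startswith("Document:"))  # -> True
```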
10. Multimodal Prompting
What It Is:
Send text + image/audio/video to the model and ask questions or issue commands.
Example:
# Send an image of a chart as a content part alongside the question
{"role": "user", "content": [
    {"type": "text", "text": "What trend do you see in this chart?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
]}
When to Use:
Visual QA
Data interpretation
Accessibility tools
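For local image files, the Chat Completions vision format expects the image as a base64 data URL inside the content-parts list. A minimal sketch (the helper name and file bytes are illustrative):

```python
import base64

def build_image_message(question, image_bytes, mime="image/png"):
    """Build a vision message: a text part plus a base64 data-URL image part."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

msg = build_image_message("What trend do you see in this chart?", b"\x89PNG...")
print(msg["content"][0]["type"])  # -> text
```

The returned message goes into the `messages` list of a vision-capable model such as `gpt-4o`.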
🏁 Conclusion
Prompting is like programming: the better your inputs, the better your outputs. Write real, hand-crafted prompts rather than AI-generated ones; real prompts give better results. Use these 10 styles depending on:
Your goal (short answer, reasoning, simulation)
The format (chat, API, role-play)
The model’s strengths (vision, memory, logic)
Written by
Hitendra Singh
I am a Full Stack Developer