Prompts Made Simple

In today’s fast-changing, AI-driven world, it’s important to learn how to work with AI agents so that you don’t fall behind.
Every day, people are discovering new ways to use AI for writing, coding, research, creativity, and problem-solving. If you know how to talk to AI effectively through the right prompts, you can save time, spark ideas, and even open up new opportunities.
So, let’s dive into prompting techniques and get an overview of how you can start mastering this powerful skill.
Overview (Prompt Engineering)
Prompt engineering is not strictly an engineering discipline or a precise science. Rather, it’s a practice, supported by mathematical and scientific foundations, with a set of guidelines for crafting precise, concise, and creative wording to instruct an LLM to carry out a task.
It’s the art of communicating with a generative large language model.
The more concise and precise your prompt, the better the LLM comprehends the task at hand and, hence, the better the response it formulates.
A good prompt typically follows these guidelines:
Instruction: describes the specific task you want the model to perform.
Context: additional information that can guide the model.
Input Data: provides examples or sample inputs from which the model can learn the pattern.
Output Format: tells the model how you’d like the answer to be presented.
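To make these four parts concrete, here is a minimal sketch of how they can be assembled into a single prompt string. All the text (the sentiment-classification task, the review, the format rule) is illustrative, not from any particular application:

```python
# Each variable holds one of the four guideline parts (illustrative text)
instruction = "Classify the sentiment of the review as Positive or Negative."
context = "Reviews come from a movie-ticketing app and may contain slang."
input_data = 'Review: "The plot dragged, but the visuals were stunning."'
output_format = "Answer with a single word: Positive or Negative."

# Join the parts into one prompt, one part per line
prompt = "\n".join([instruction, context, input_data, output_format])
print(prompt)
```

The exact layout doesn’t matter much; what matters is that all four parts are present and clearly separated.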
1. Zero-Shot Prompting
“No examples in your prompt.”
# Assumes: from openai import OpenAI; client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {'role': 'user', 'content': "Hey there"},
    ],
)
A single-sentence chat is a common zero-shot prompt. The LLM has to guess an appropriate response based on limited information, so the success of a zero-shot prompt depends on the model’s pre-training data.
2. Few-Shot Prompting
“Provide one or more examples in your prompt.”
# Define the system prompt: this sets the rules for the AI's behavior
system_prompt = """
You are an AI agent specialized in maths operations.
You should not answer any query that is not related to maths.
For a given query, help the user solve it with an explanation.
Example:
Input: 2 + 2
Output: 2 + 2 is 4, which is calculated by adding the 2 numbers
Example:
Input: 3 * 2
Output: 3 * 2 is 6, which is calculated by multiplying the 2 numbers. Fun fact: you can multiply 2 by 3 and the result will be the same
Example:
Input: Why is the sky blue?
Output: Is that really a maths-related question?
"""
# Create a chat completion request using OpenAI's API
completion = client.chat.completions.create(
    model="gpt-4",
    temperature=1,
    messages=[
        {'role': 'system', 'content': system_prompt},
        {'role': 'user', 'content': "What is a mobile phone?"},
    ],
)
print(completion.choices[0].message.content)
In few-shot prompting, the user gives a set of examples that help the LLM understand what is being asked and learn the pattern or tone you want.
3. System Prompting
“It sets a context and purpose for the LLM.”
# Define the system prompt: this sets the rules for the AI's behavior
system_prompt = """
You are an AI agent specialized in solving mathematical problems.
You should only answer queries related to mathematics and politely refuse any question outside this scope.
"""
# Create a chat completion request using OpenAI's API
completion = client.chat.completions.create(
    model="gpt-4",  # The model to use
    temperature=1,
    messages=[
        {'role': 'system', 'content': system_prompt},  # System message defines the agent's role
        {'role': 'user', 'content': "What is a mobile phone?"},  # User's question
    ],
)
print(completion.choices[0].message.content)
It defines the bigger picture of what the LLM should do and provides specific information to guide it
4. Role Prompting
Asking the AI to “act as” a specific role or persona.
Think of it like telling a friend, “Pretend you’re a chef…” or “Imagine you’re my math teacher…”. The AI then adjusts its behavior to match that role
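A minimal sketch of what this looks like in practice: the persona goes into the system message, just as in the system-prompting example above. The role text and the question here are illustrative:

```python
# The persona ("act as...") lives in the system message (illustrative text)
role = "Act as a friendly high-school maths teacher who explains with simple analogies."
messages = [
    {"role": "system", "content": role},
    {"role": "user", "content": "Why does a negative times a negative give a positive?"},
]
# Pass `messages` to client.chat.completions.create(...) exactly as in the
# earlier snippets; the persona then steers every subsequent answer.
print(messages[0]["content"])
```

Changing only the role string ("chef", "lawyer", "pirate") changes the tone and vocabulary of every response without touching the user’s question.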
5. Step-Back Prompting
It’s a two-step prompting technique.
# Assumes `client` is an OpenAI client; the question is illustrative
question = "Why do metals conduct electricity?"

# Step 1: Generate the higher-level abstraction (step back)
step_back_prompt = f"""
You are a teacher. First, create a high-level abstract explanation
of the following question without directly answering it:
Question: {question}
"""
step_back_response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": step_back_prompt}]
)
abstract_explanation = step_back_response.choices[0].message.content
# Step 2: Use the abstraction to refine the final answer
final_prompt = f"""
Here is a high-level abstract explanation of the question:
{abstract_explanation}
Now, based on that, provide a clear and detailed final answer
to the original question: {question}
"""
final_response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": final_prompt}]
)
print("Abstract reasoning:\n", abstract_explanation)
print("\nFinal Answer:\n", final_response.choices[0].message.content)
First step: You prompt the LLM with a general or high-level question about the task. This helps the model generate a broad perspective, identify key factors, or break down the problem.
Second step: You take that response and feed it back to the LLM with a separate, more specific prompt to produce the final answer.
This method encourages the model to “step back,” reflect on the bigger picture, and then focus on the actual solution.
6. Chain of Thought (CoT) Prompting
“Improves the reasoning capabilities of LLM by generating intermediate reasoning steps.”
# Define the system prompt that instructs the LLM to follow a structured,
# step-by-step reasoning process with JSON output.
system_prompt = """
You are an AI assistant who is an expert at breaking down problems before resolving them.
For the given input, analyze it and break the problem down step by step.
Think at least 5-6 times about how to solve the problem before solving it.
The steps: you get a user input, you analyze, you think, you think again several times, then you return an output with an explanation, and finally you validate the output before giving the response.
Follow the steps sequentially: analyze, think, output, validate, result.
Rules:
1. Follow the JSON output schema below.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyze the user query.
Output schema:
{"step": "string", "content": "string"}
Example:
Input: What is 2 + 2?
Output: {"step": "analyze", "content": "The user is interested in a maths query and is asking a basic arithmetic question"}
Output: {"step": "think", "content": "To perform the addition I must go from left to right and add all the operands"}
Output: {"step": "output", "content": "4"}
Output: {"step": "validate", "content": "4 seems to be the correct answer for 2 + 2"}
Output: {"step": "result", "content": "2 + 2 is 4, which is calculated by adding the numbers"}
"""
# Assumes: import json; from openai import OpenAI; client = OpenAI()
# Initialize messages with the system prompt
messages = [{'role': 'system', 'content': system_prompt}]

# Take input from the user and append it to the conversation once
query = input('> ')
messages.append({"role": "user", "content": query})

# Loop, letting the model emit one reasoning step per call, until "result"
while True:
    completion = client.chat.completions.create(
        model="gpt-4o",                           # model name
        response_format={"type": "json_object"},  # expect structured JSON
        messages=messages,
    )
    # Parse the JSON response from the model
    parsed_response = json.loads(completion.choices[0].message.content)
    messages.append({"role": "assistant", "content": json.dumps(parsed_response)})
    # If the current step is not "result", print it and keep going
    if parsed_response.get("step") != "result":
        print(f'🧠: {parsed_response.get("content")}')
        continue
    # "result" step reached: print the final output and stop
    print(f'🤖: {parsed_response.get("content")}')
    break
With CoT, you ask the LLM to show its reasoning process step by step.
Instead of jumping straight to a result, the model “thinks out loud,” which usually improves accuracy on complex tasks.
🚀 Conclusion
Prompting isn’t some strict science. It’s more like a skill you get better at by practicing. The more you try, the more you’ll understand how to talk to AI in a way that gets the results you want.
We looked at different techniques like Zero-Shot, Few-Shot, Role Prompts, System Prompts, Step-Back, and Chain of Thought. Each one has its own purpose, and once you start using them, you’ll notice how much better your AI answers become.
The rule is simple: clearer prompts = better responses.
So don’t overthink it; just experiment, play around, and learn as you go. The more you practice, the more natural it will feel.
Thanks for reading! Stay tuned for the next post ….
Written by Varad Bhalsing