Prompt Engineering 101

Manmeet Singh
3 min read

Understanding Why System Prompts Matter

When interacting with an AI model, you usually set the tone and behavior with a system prompt: the invisible guide at the start of each conversation. Think of it as greeting the model with: “You are helpful and concise.” That cue influences how the AI replies. A system prompt can define role, tone, format, or even the rules of the game. For example:

You are an assistant that writes short fairy tales for children.
You answer in friendly, imaginative sentences.

Here, you assign the role and style, making sure the AI stays on track. Production systems such as Microsoft’s Copilot have used similar system prompts to control output format and tone.
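
The snippets below all call a chat_completion helper. Here is one minimal sketch of what such a helper could look like, assuming the OpenAI Python SDK (v1+) and an illustrative model name; it also shows where a system prompt like the one above plugs in.

# A minimal sketch of the chat_completion helper used by the snippets below.
# Assumptions: the OpenAI Python SDK (v1+), an OPENAI_API_KEY environment
# variable, and an illustrative model name -- swap in whatever you actually use.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an assistant that writes short fairy tales for children. "
    "You answer in friendly, imaginative sentences."
)

def chat_completion(prompt, system=SYSTEM_PROMPT, model="gpt-4o-mini"):
    """Send one user message plus a system prompt and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content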


Zero-Shot Prompting

What it is: You directly ask the model to perform a task, with no examples provided. It relies entirely on its training and your instruction.

When to use: For simple, well-understood tasks (e.g. translation, summarization, classification).

Code example:

def zero_shot(prompt):
    return chat_completion(f"Classify this as positive or negative: {prompt}")

# Usage
result = zero_shot("I am so happy with this product!")

Why it works: If the model has seen the task in training, it can often do it well even without examples.


Few-Shot Prompting

What it is: You provide a few examples along with instructions. The model imitates the pattern.

When to use: For structured tasks or when precise formatting is needed (e.g. JSON, templated outputs).

Code example:

def few_shot(prompt):
    examples = """
Example 1: Convert to passive voice:
The cat chased the mouse. → The mouse was chased by the cat.

Example 2: Convert to passive voice:
The chef cooked the meal. → The meal was cooked by the chef.

Now convert to passive:
""" + prompt
    return chat_completion(examples)
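
As with the other snippets, a quick call might look like this (the input sentence is just an illustration):

# Usage
result = few_shot("The dog buried the bone.")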

This helps the model mimic exact transformations because it sees examples first.


Chain-of-Thought Prompting

What it is: You ask the model to show its reasoning step by step. This helps it break down complex tasks.

When to use: For multi-step logic, math problems, or complex decision-making.

Code example:

def chain_of_thought(prompt):
    return chat_completion(f"{prompt} Explain your reasoning step by step.")

# Usage
chain_of_thought("If a train travels 60 miles in one hour, how far in 3 hours?")

Why it works: The model gives intermediate reasoning, which often improves accuracy, especially on reasoning benchmarks.


Persona-Based Prompting

What it is: You ask the model to adopt a persona or role like a teacher, scientist, or storyteller. This can influence tone, vocabulary, and style.

When to use: When you want responses from a specific voice or perspective.

Code example:

def persona_response(prompt):
    role = "You are a kind, patient teacher explaining to a child."
    return chat_completion(f"{role}\n{prompt}")

# Usage
persona_response("Explain what a rainbow is.")

Some research shows that adding a persona may help in certain tasks, but results vary. In some studies, persona-based prompts outperformed Chain-of-Thought and few-shot in reasoning tasks, while others found no real improvement in factual performance when adding social roles.


Bringing It All Together

When you prompt well, the model performs better. Here is how the pieces fit (a short combined sketch follows the list):

  • System prompt: sets the foundation (role, tone, rules).

  • Zero-shot: go direct when tasks are simple and clear.

  • Few-shot: show examples when you need structure or exact formatting.

  • Chain-of-Thought: use for logic or multi-step problems.

  • Persona-based: adopt a specific voice or style, but test its impact.
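
To close, here is a short sketch that stacks several of these techniques in one call: the system prompt carries a persona, the prompt includes one worked example (few-shot), and the instruction asks for step-by-step reasoning (Chain-of-Thought). It reuses the chat_completion helper sketched earlier; the tutor persona and the example question are illustrative.

# Combining a persona system prompt, one few-shot example, and a
# chain-of-thought instruction in a single call. Reuses the assumed
# chat_completion helper; the persona and example text are illustrative.
def combined_prompt(question):
    system = "You are a patient math tutor who explains things simply."
    prompt = (
        "Example: Q: If apples cost $2 each, how much do 3 apples cost?\n"
        "A: Each apple costs $2, so 3 apples cost 3 * 2 = $6.\n\n"
        "Answer the next question the same way, explaining your reasoning step by step.\n"
        f"Q: {question}"
    )
    return chat_completion(prompt, system=system)

# Usage
combined_prompt("A recipe uses 3 eggs for 12 cookies. How many eggs for 36 cookies?")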
