System prompts

Chaitrali Kakde
6 min read

The new AI world operates on prompting: the way you craft the prompts you give to your Large Language Models (LLMs) largely determines the output you receive. It's like being an architect of language; your blueprints (prompts) guide the AI in constructing its responses. So, let's dive into the art of giving truly enchanting instructions!

A system prompt is a powerful, initial instruction given to an AI model to define its behavior, persona, and rules for an entire conversation. Unlike a regular "user prompt," which is a single query for a specific task, the system prompt sets the overarching context and constraints that the AI must follow for all subsequent interactions. It's the set of guiding principles that the model will refer back to, no matter what question you ask it.

For example:

You are a travel planning AI assistant for a global tour agency. Your role is to help customers create personalized itineraries, suggest destinations, and provide travel tips. Always keep recommendations budget-friendly, culturally sensitive, and easy to follow.

Difference between a user prompt and a system prompt

| Aspect | System Prompt | User Prompt |
| --- | --- | --- |
| Purpose | Establishes overarching context, persona, and rules for the conversation. | Asks a specific question or requests a specific task. |
| Scope | Applies to the entire conversation. | Applies only to that single request. |
| Duration | Persistent throughout the chat unless changed. | Temporary; only affects the current message. |
| Example | "You are a helpful assistant who is an expert in creating concise summaries. Your goal is to simplify complex topics. Keep all responses to a maximum of three sentences." | "Summarize the concept of quantum computing." |
| Effect | Shapes tone, style, and constraints for all responses. | Directly triggers one specific action or answer. |
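To make the distinction concrete, here is a minimal sketch in Python of how a system prompt persists across turns while each user prompt is a single message. It assumes the role/content message format used by many chat LLM APIs; the `build_messages` helper and the sample conversation are hypothetical.

```python
def build_messages(system_prompt, conversation):
    """Prepend the persistent system prompt to every request.

    The system message comes first and stays the same across turns;
    the conversation alternates user / assistant messages.
    """
    return [{"role": "system", "content": system_prompt}] + [
        {"role": "user" if i % 2 == 0 else "assistant", "content": turn}
        for i, turn in enumerate(conversation)
    ]

messages = build_messages(
    "You are a travel planning AI assistant for a global tour agency.",
    [
        "Suggest a 3-day itinerary for Kyoto.",
        "Day 1: temples and markets; Day 2: ...",  # assistant's earlier reply
        "Can you keep it under $500?",
    ],
)
```

Note that on every request, the same system message is sent again at position zero; that is what makes its rules "persistent" from the model's point of view.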

Components of Effective System Prompts

  1. Role Definition: Clearly states the AI's role or persona.

  2. Behavioral Guidelines: Outlines how the AI should interact and respond.

  3. Knowledge Boundaries: Specifies the scope of the AI's knowledge or expertise.

  4. Ethical Constraints: Incorporates rules for ethical behavior and content generation.

  5. Interaction Style: Defines the tone, formality, or style of communication.

  6. Task-Specific Instructions: Provides guidance for handling particular types of queries or tasks.
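The six components above can be assembled into a single system prompt string. The sketch below is a hypothetical helper; the component texts are illustrative placeholders, not a recommended wording.

```python
# Illustrative component texts for a travel-planning assistant.
COMPONENTS = {
    "Role Definition": "You are a travel planning assistant for a global tour agency.",
    "Behavioral Guidelines": "Answer politely and keep recommendations budget-friendly.",
    "Knowledge Boundaries": "Only discuss travel-related topics.",
    "Ethical Constraints": "Decline requests involving unsafe or illegal activities.",
    "Interaction Style": "Use a friendly, informal tone.",
    "Task-Specific Instructions": "For itineraries, list one main activity per day.",
}

# Join the labelled components into one system prompt, one rule per line.
system_prompt = "\n".join(f"{name}: {text}" for name, text in COMPONENTS.items())
```

Keeping the components as a dictionary makes it easy to swap out one rule (say, the interaction style) without rewriting the whole prompt.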

Types of prompting

These two techniques are the foundational building blocks of prompt engineering. They describe how much context and how many examples you provide the LLM with to guide its response.

Zero-Shot Prompting

▪ What it is: The simplest form of prompting. You give the model a task or a question without providing any examples of the desired output. The model relies entirely on its pre-existing knowledge from its vast training data.

▪ How it works: The prompt is a direct instruction. The LLM understands the task based on its general understanding of language and the world, and it generates a response accordingly.

▪ Example:

▪ Prompt: What is the capital of France?

▪ Output: The capital of France is Paris.

▪ When to use it: For straightforward, factual questions, simple translations, or tasks that don't require a specific format or tone. It's the go-to for quick and general queries.

Few-Shot Prompting

▪ What it is: You provide the model with a few examples of input-output pairs to demonstrate the pattern, format, or style you want. This helps the model understand the specific context of your request before you give it the final prompt.

▪ How it works: The LLM "learns" from the examples provided within the prompt itself. It identifies the underlying pattern and then applies that pattern to your new, unseen input. This is a form of "in-context learning."

▪ Example:

▪ Prompt:

The movie was great! -> Positive

This is a waste of money. -> Negative

I have no strong feelings about this. -> Neutral

The service was terrible. ->

▪ Output: Negative

▪ When to use it: For tasks that require a specific format (e.g., sentiment analysis, classification, summarization) or when the desired output is not immediately obvious to a general-purpose model.
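A few-shot prompt like the sentiment example above can be built programmatically from a list of labelled examples. This is a minimal sketch; the `few_shot_prompt` helper is hypothetical.

```python
# Labelled examples demonstrating the pattern we want the model to follow.
EXAMPLES = [
    ("The movie was great!", "Positive"),
    ("This is a waste of money.", "Negative"),
    ("I have no strong feelings about this.", "Neutral"),
]

def few_shot_prompt(examples, new_input):
    """Render the examples as input -> label lines, then append the
    new input with a trailing arrow for the model to complete."""
    lines = [f"{text} -> {label}" for text, label in examples]
    lines.append(f"{new_input} ->")
    return "\n".join(lines)

prompt = few_shot_prompt(EXAMPLES, "The service was terrible.")
```

Because the examples live in a plain list, you can grow or shuffle them without touching the formatting logic.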

Advanced Reasoning Techniques

These techniques build on the foundation of few-shot prompting to unlock complex reasoning, problem-solving, and decision-making abilities in LLMs.

Chain-of-Thought (CoT) Prompting

▪ What it is: A technique that guides the model to break down a complex problem into a series of logical, intermediate steps. Instead of just asking for the final answer, you prompt the model to "think step by step."

▪ How it works: By adding phrases like "Let's think step by step," you encourage the LLM to generate a transparent reasoning process. This makes the model's work more interpretable and often leads to more accurate results, especially for math word problems and logical puzzles.

▪ Example:

If a baker has 20 cupcakes, sells 10, and then bakes another 5, how many cupcakes does she have now? Let's think step by step.

▪ Output:

The baker starts with 20 cupcakes.

She sells 10, so 20 - 10 = 10 cupcakes.

She bakes 5 more, so 10 + 5 = 15 cupcakes.

The final answer is 15.

▪ When to use it: For any task that requires multi-step reasoning, such as arithmetic, commonsense reasoning, or complex question-answering.
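In code, eliciting chain-of-thought often amounts to appending the step-by-step cue to the question. A tiny sketch (the `with_cot` wrapper is hypothetical):

```python
def with_cot(question):
    """Append the cue that encourages step-by-step reasoning."""
    return f"{question}\nLet's think step by step."

prompt = with_cot(
    "If a baker has 20 cupcakes, sells 10, and then bakes "
    "another 5, how many cupcakes does she have now?"
)
```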

Tree-of-Thought (ToT) Prompting

▪ What it is: An extension of CoT that allows the model to explore multiple reasoning paths simultaneously, much like branches on a tree. It can evaluate these different paths and select the most promising one.

▪ How it works: The model doesn't just follow a single chain of thought. It generates several different ideas or intermediate steps, assesses their potential, and then "prunes" the less promising ones. It can even backtrack and explore a different path if a dead end is reached.

▪ When to use it: For highly complex problems that have multiple possible solutions, such as strategic planning, creative writing, or solving logic puzzles like Sudoku. It is a more resource-intensive technique but can lead to superior results for challenging tasks.
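The expand-score-prune loop at the heart of ToT can be sketched as a toy beam search. In a real system, both `expand` (proposing candidate thoughts) and `score` (evaluating partial paths) would be LLM calls; here they are stub functions so the control flow is visible.

```python
def tree_of_thought(root, expand, score, depth, k=2):
    """Toy ToT loop: expand every path, score the candidates,
    and keep only the top-k branches at each level (pruning)."""
    frontier = [[root]]
    for _ in range(depth):
        candidates = [path + [t] for path in frontier for t in expand(path)]
        frontier = sorted(candidates, key=score, reverse=True)[:k]
    return max(frontier, key=score)

# Tiny demo: "thoughts" are digits and the scorer rewards larger sums,
# so the search keeps choosing 3 at every level.
best = tree_of_thought(
    root=0,
    expand=lambda path: [1, 2, 3],
    score=lambda path: sum(path),
    depth=3,
)
# best == [0, 3, 3, 3]
```

Backtracking, as described above, corresponds to a pruned branch's sibling surviving into the next level when the "promising" branch's children all score poorly.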

Other Noteworthy Prompting Techniques

Self-Consistency

▪ What it is: A technique that involves prompting the model multiple times to generate several different reasoning paths (e.g., using CoT) and then selecting the most common answer from all of the outputs.

▪ Why it's useful: It's a way of leveraging the model's own "wisdom of the crowd" to improve the reliability and accuracy of its responses.
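The "wisdom of the crowd" step is just a majority vote over the sampled answers. A minimal sketch, with the five sampled runs stubbed as a plain list of final answers:

```python
from collections import Counter

def self_consistency(answers):
    """Return the most common final answer across sampled reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

# e.g. five sampled chain-of-thought runs on the cupcake problem,
# two of which made arithmetic mistakes:
final = self_consistency([15, 15, 10, 15, 25])
# final == 15
```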

Role Prompting

▪ What it is: Instructing the model to adopt a specific persona, such as "Act as a professional editor" or "You are a friendly travel guide."

▪ Why it's useful: It shapes the tone, style, and content of the model's responses to be consistent with the assigned role, making the output more targeted and professional.

Meta-Prompting

▪ What it is: A technique where you prompt the model to reflect on its own process or to generate a better prompt for itself.

▪ Why it's useful: This is a high-level form of prompting where you can have the model assist you in refining your own instructions, leading to more optimal outputs.
