#2 - Prompt Engineering: A Gentle Introduction


INTRODUCTION
Does crafting AI prompts require real skill? LLMs (not ChatGPT-like AI applications, but the models that work behind them) expect queries to be precise, structured, and rich in context in order to return the output the end user expects. In this article, I would like to share some extra insights into the world of prompts, take a brief look at the different styles used in prompting, and in the process learn how to write better prompts for interacting with an AI application.
GIGO
In the software world, vague requirements set vague expectations for the end product. The same applies to AI: a lazy prompt like “Tell me about cloud computing” returns garbage results. Therefore we must remember: specificity is king.
Example of a bad prompt:
“Explain What are APIs”
Example of a good prompt:
“Explain REST APIs in 3 bullet points for a junior front-end developer with 2 years of experience using JavaScript.”

Writing elaborate prompts that specifically state your requirements gives the LLM more context and helps you get more tailored and accurate results.
Looking at the two prompts side by side, the difference explains itself.
MODEL SPECIFIC PROMPTING
Just as Python and JavaScript have different syntax, LLMs like Alpaca, LLaMA, and the GPT models have their own prompt structures. Here are some examples:
1. Alpaca (Stanford) prompt
If you know Markdown, this should feel familiar, since each section is marked off with a header:
### Instruction:
Validate this JSON payload against a schema:
### Input:
{ "user_id": 123, "name": "Alice" }
### Response:
This style requires a clear separation of instruction, input, and response in order to get a structured reply back.
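To make this concrete, here is a minimal Python sketch that assembles an Alpaca-style prompt; build_alpaca_prompt is just an illustrative helper name of my own, not part of any library:

def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    # Assemble the three Alpaca sections; "### Response:" is left
    # empty so the model knows where to start generating.
    prompt = f"### Instruction:\n{instruction}\n"
    if input_text:
        prompt += f"\n### Input:\n{input_text}\n"
    prompt += "\n### Response:\n"
    return prompt

print(build_alpaca_prompt(
    "Validate this JSON payload against a schema:",
    '{ "user_id": 123, "name": "Alice" }',
))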
2. LLaMA-2’s [INST] tag based prompt
This is similar to XML format, where tags define boundaries:
<s>
[INST]
Debug this Python code:
def sum(a, b):
return a - b
[/INST]
</s>
Here, the <s> and </s> tokens tell the model where the sequence starts and ends, and the query itself is wrapped between the [INST] and [/INST] tags.
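A minimal Python sketch of the same idea; format_llama2_prompt is an illustrative helper of my own, and in real use the tokenizer usually adds the <s> token for you:

def format_llama2_prompt(user_query: str, system: str = "") -> str:
    # LLaMA-2 chat convention: the instruction sits between
    # [INST] ... [/INST], with an optional system message inside
    # <<SYS>> ... <</SYS>> at the top of the first turn.
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"<s>[INST] {sys_block}{user_query} [/INST]"

print(format_llama2_prompt("Debug this Python code:\ndef sum(a, b):\n    return a - b"))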
3. ChatGPT’s ‘ChatML’ prompt
A ChatML-style prompt separates the context from the query using role-tagged messages; this is the same structure the OpenAI chat API expects. For example:
{ "role": "system", "content": "You are a security auditor." },
{ "role": "user", "content": "Scan this JS code for SQL vulnerabilities." }
Here, the ‘system’ role is used to inject context, and the ‘user’ role is used to send the query to the model.
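Here is a minimal sketch using the official openai Python client; the model name is an assumption, so substitute whichever chat model you have access to:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a security auditor."},
        {"role": "user", "content": "Scan this JS code for SQL vulnerabilities."},
    ],
)
print(response.choices[0].message.content)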
PROMPTING STYLES
Over time, different prompting styles have evolved with the aim of getting better outputs from an LLM. I have listed a few of them below:
Zero-Shot Prompting :
This technique involves asking the AI to perform a task without giving it any prior examples. You simply state your request directly, relying on the AI's pre-existing knowledge and understanding. It's like asking a question without providing any context from previous answers.

Translate the following English sentence to French: 'Hello, how are you?'
One-Shot Prompting :
Here, you provide the AI with just one example of the task you want it to perform. This single example helps guide the AI on the desired output format or style.
Here's an example of converting a sentence to passive voice:
Active: The dog chased the ball.
Passive: The ball was chased by the dog.
Now, convert this sentence:
Active: She wrote a letter.
Passive:
It gives the AI a simple template to follow for similar requests.
Few-Shot Prompting :
Similar to one-shot prompting, but with more examples.
Here are some examples of movie titles summarized:
Title: Inception, Summary: A thief who enters people's dreams to steal information.
Title: The Matrix, Summary: A computer hacker learns the truth about his reality.
Now, summarize this movie:
Title: Interstellar, Summary:
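In code, few-shot examples are often passed as alternating user/assistant turns rather than one long prompt string. A minimal sketch, reusing the client from the ChatML example above:

few_shot_messages = [
    # Each example pair teaches the model the input/output pattern.
    {"role": "user", "content": "Title: Inception"},
    {"role": "assistant", "content": "Summary: A thief who enters people's dreams to steal information."},
    {"role": "user", "content": "Title: The Matrix"},
    {"role": "assistant", "content": "Summary: A computer hacker learns the truth about his reality."},
    # The actual query comes last.
    {"role": "user", "content": "Title: Interstellar"},
]
response = client.chat.completions.create(model="gpt-4o-mini", messages=few_shot_messages)
print(response.choices[0].message.content)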
Chain-of-Thought (CoT) Prompting :
This method encourages the AI to break down a complex problem into intermediate steps, showing its reasoning process. By asking the AI to "think step by step," you are essentially guiding it to provide a more accurate and logical conclusion.
Let me provide an elaborate system prompt:
""" You are an AI assistant who is expert in breaking down complex problems and then resolving the user query. For the given user input, analyse the input and break down the problem step by step. At least think 5-6 steps on how to solve the problem before solving it. The steps are: "analyse", "think", "output", "validate", and finally "result". Rules: 1. Follow the strict JSON output schema. 2. Always perform one step at a time and wait for the next input. 3. Carefully analyse the user query. Output Format (strict JSON): { "step": "string", "content": "string" } Example: Input: What is 2 + 2. Output: { "step": "analyse", "content": "The user is interested in a basic arithmetic question: 2 + 2." } Output: { "step": "think", "content": "To perform the addition, I need to add 2 and 2." } Output: { "step": "output", "content": "4" } Output: { "step": "validate", "content": "The result of 2 + 2 is indeed 4." } Output: { "step": "result", "content": "2 + 2 = 4" } """
As you might have already noticed, this mirrors the human problem-solving thought process.
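Because the system prompt asks for one step per turn, the calling code has to loop until it sees the "result" step. A minimal driver sketch, assuming the openai client from earlier and that the model honours the strict JSON rule (COT_SYSTEM_PROMPT is the prompt above, stored as a string):

import json

messages = [
    {"role": "system", "content": COT_SYSTEM_PROMPT},  # the system prompt shown above
    {"role": "user", "content": "What is 45 * 3 + 10?"},
]
for _ in range(10):  # upper bound so a misbehaving model can't loop forever
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    step = json.loads(reply)  # assumes the model honoured the strict JSON rule
    print(step["step"], "->", step["content"])
    if step["step"] == "result":
        break
    # Nudge the model on to its next step, per rule 2 of the system prompt.
    messages.append({"role": "user", "content": "Continue to the next step."})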
Self-Consistency Prompting :
In order to fully understand this technique, let me ask you this: what do you think the output of the following query to an LLM would be?

“When I was 6 years old, my sister was half my age. Now, I am 70. How old is my sister?”

You may be surprised to learn that it can return ‘35’ as the answer! The LLM simply halves 70, missing that the sister is 3 years younger, which makes the correct answer 67.
So how can we correct this behavior?
That’s where self-consistency prompting comes into play.
This technique involves asking the AI to generate multiple reasoning paths or answers to the same question. After generating various solutions, the AI then identifies the most consistent or frequently occurring answer. It helps validate the AI's reasoning. Let’s look at an example:

Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 + 2 = 5 cars. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?
A: Leah had 32 chocolates and Leah’s sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39.
You can consider it an advanced version of the ‘few-shot prompting’ technique described earlier; the difference is that the model is sampled several times and the most common final answer is kept, as the sketch below shows.
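A minimal sketch of the voting loop; the last-line answer parsing is a simplifying assumption, and in practice you would extract the answer however your prompt formats it:

from collections import Counter

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    answers = []
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0.8,  # non-zero so each sample can take a different reasoning path
            messages=[{"role": "user", "content": f"{question}\nThink step by step, then give the final answer on the last line."}],
        )
        # Simplifying assumption: the final answer is the last line of the reply.
        answers.append(response.choices[0].message.content.strip().splitlines()[-1])
    # The most frequent answer across the samples wins.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("When I was 6, my sister was half my age. Now I am 70. How old is my sister?"))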
Persona-Based Prompting :
Assigns a specific expertise, trait, or perspective (e.g., “doctor” or “historian”) to align responses with a defined role.
Act as a friendly and enthusiastic travel blogger. Write a short paragraph encouraging people to visit Goa, highlighting its beaches, relaxed atmosphere and vibrant culture.
This is quite useful in creating chatbots.
Role-Play Prompting :
Simulates interactive scenarios where the model adopts a character or conversational role (e.g., “travel agent” or “customer”).
You are a seasoned chef explaining how to make the perfect omelet. I am a beginner cook asking for simple instructions. Start the conversation.
This helps the LLM set a precise context before answering the user's queries.
ADVANCED PROMPTING STYLES
There are some other prompting styles, but those felt too far-fetched for the scope of this article, so let me just describe each in a single sentence and leave them as topics to be covered in a later article:
Contextual Prompting: Here, you provide the AI with extensive background information or relevant details before asking your question.
Example: "We are launching a new sustainable clothing brand called 'EcoWear' that uses recycled materials and ethical manufacturing. Our target audience is environmentally conscious young adults. Draft a social media post announcing our grand opening, emphasizing our commitment to sustainability."

Multimodal Prompting: This technique involves giving the AI inputs in more than one format, such as text combined with an image or audio. ChatGPT, for example, now lets the user upload documents for this purpose.

Example: "Based on the provided image of a messy room, suggest 3 actionable steps to organize it efficiently." (Implicit: the user provides an image of a messy room along with this text prompt.)
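As a rough sketch of how a text-plus-image prompt looks through the OpenAI chat API (the image URL is a placeholder, and vision support depends on the model):

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: a vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Based on the provided image of a messy room, suggest 3 actionable steps to organize it efficiently."},
            {"type": "image_url", "image_url": {"url": "https://example.com/messy-room.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)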
CONCLUSION
Prompting is a skill that combines both creativity and technical understanding. It begins with crafting clear and precise prompts, which is essential for guiding the AI to produce the desired output. It's important to test your prompts repeatedly, making adjustments as needed to refine the results. Additionally, each prompt should be customized to fit the specific architecture and capabilities of the AI model you are using. As artificial intelligence continues to advance, prompting techniques will also develop and improve. However, the principle of "Garbage In, Garbage Out" (GIGO) will always be relevant, meaning that the quality of the input directly affects the quality of the output. Therefore, selecting the right prompting style when interacting with a language model is crucial for achieving optimal results.
Written by Mishal Alexander
I'm passionate about continuous learning, keeping myself up to date with the latest changes in the IT field. My interests are in the areas of Web Development (JavaScript/TypeScript), Blockchain, and GenAI (focusing on creating and deploying memory-aware AI-powered RAG applications using LangGraph, LangFuse, QdrantDB and Neo4J). I welcome professional connections to explore new ideas and collaborations.