Prompt Engineering Basics

To get the most out of large language models (LLMs), you need to craft your prompts thoughtfully. Focusing on conciseness, structure, and context, and including the details that actually matter to the task, improves the accuracy of an LLM's output.
The most common LLM prompt types include:
Questions
Responses
Statements
Detailed instructions
Here is an example of a prompt:
"Create a simple and concise summary of the given context, starting with a TL;DR and followed by bullet points."
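Prompts like this are often assembled programmatically rather than typed by hand. Below is a minimal sketch of a prompt-builder function; the function name and layout (TL;DR first, then bullets) are illustrative choices, not a standard API.

```python
def build_summary_prompt(context: str, max_bullets: int = 5) -> str:
    """Assemble a summarization prompt like the example above.

    The structure (TL;DR first, then bullet points) is one reasonable
    layout, not a required format.
    """
    return (
        "Create a simple and concise summary of the context below.\n"
        f"Start with a one-sentence TL;DR, then give up to {max_bullets} "
        "bullet points.\n\n"
        f"Context:\n{context}"
    )

prompt = build_summary_prompt("LLMs map input text to output text.")
print(prompt)
```

Templating the prompt this way keeps the instruction stable while the context changes, which also makes later testing easier.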
Prompt Engineering Techniques: How to Write Good LLM Prompts
Writing good LLM prompts requires you to:
1. Be specific and clear
“Summarize the key causes and consequences of World War I in under 150 words.”
2. Structure the prompts
“Act as a history teacher. First, list three major events that led to the fall of the Roman Empire. Then, briefly explain each in 2-3 sentences.”
3. Provide context when possible
“I’m preparing a 5-minute speech for high school students about climate change. Can you provide a simple explanation of the greenhouse effect suitable for that audience?”
4. Ask open-ended questions when you want an explanation
“Why do some economists argue against raising the minimum wage? Explain both sides of the debate.”
5. Ask for examples
“What are some effective time management techniques for students? Please include at least three practical examples.”
6. Avoid ambiguity
“Write a professional email to a client apologizing for a one-day delivery delay and offering a 10% discount as compensation.”
7. Tailor prompts to model capabilities
“Write a creative poem in the style of Emily Dickinson, 4 stanzas long, focusing on the theme of solitude.”
8. Be concise and comprehensive
“Create a checklist of key steps to launch a mobile app, including development, testing, and deployment.”
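Several of the techniques above (assigning a role, stating the task specifically, supplying context, and spelling out constraints) can be combined in one reusable template. The sketch below is a simple way to do that; the function and parameter names are my own, not a standard library.

```python
def build_prompt(role: str, task: str, context: str = "",
                 constraints: str = "") -> str:
    """Combine the techniques above: a role for structure, a specific
    task, optional context, and explicit constraints to avoid ambiguity."""
    parts = [f"Act as {role}.", task]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

print(build_prompt(
    "a history teacher",
    "List three major events that led to the fall of the Roman Empire, "
    "then explain each in 2-3 sentences.",
    constraints="Keep the answer under 150 words.",
))
```

Optional fields are simply omitted when empty, so the same template covers both minimal and fully specified prompts.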
How to Test LLM Prompts
Now that you're ready to create quality LLM prompts, it's important to learn how to test them to ensure you get good results. Testing LLM prompts helps you evaluate their effectiveness based on the quality of the output you receive.
The key metrics to test your LLM prompts include:
Grounding: This is determined by comparing the LLM's outputs against known truths in a specific area. It helps you assess how accurate your LLM is in that domain.
Relevance: This indicates whether the LLM's output actually addresses the prompt, rather than drifting off-topic.
Efficiency: This measures how quickly the LLM produces outputs. You can measure it directly as the latency between submitting a prompt and receiving the response.
Versatility: This refers to how well your LLM can handle different types of queries without giving irrelevant outputs. A good LLM should accurately handle a wide range of queries.
Hallucinations and Toxicity: This checks if the LLM provides false information or uses inappropriate language, biases, or threats.
1. Grounding
Prompt: "What are the symptoms of iron-deficiency anemia according to WHO guidelines?"
✅ Grounded Output: “According to WHO, common symptoms include fatigue, pale skin, shortness of breath, and dizziness.”
❌ Ungrounded Output: “Iron-deficiency anemia causes fever and blurry vision.” (Incorrect)
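A first-pass grounding check can compare an output against a set of known reference facts. The keyword-matching sketch below is a crude proxy (real grounding evaluation uses curated datasets and often an expert or model-based judge), and the symptom list is taken from the example above:

```python
# Reference facts for the example prompt above (illustrative set).
WHO_SYMPTOMS = {"fatigue", "pale skin", "shortness of breath", "dizziness"}

def grounding_score(output: str, reference_facts: set) -> float:
    """Fraction of reference facts mentioned in the output.
    A rough proxy only: substring matching misses paraphrases."""
    text = output.lower()
    hits = sum(1 for fact in reference_facts if fact in text)
    return hits / len(reference_facts)

grounded = ("Common symptoms include fatigue, pale skin, "
            "shortness of breath, and dizziness.")
ungrounded = "Iron-deficiency anemia causes fever and blurry vision."
print(grounding_score(grounded, WHO_SYMPTOMS))    # 1.0
print(grounding_score(ungrounded, WHO_SYMPTOMS))  # 0.0
```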
2. Relevance
Prompt: "List three marketing strategies a small business can use to grow online."
✅ Relevant Output: “1. Content marketing through blogs and SEO; 2. Social media campaigns; 3. Email marketing.”
❌ Irrelevant Output: “The history of marketing began in the 20th century…” (Off-topic)
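Relevance can be screened in a similar mechanical way: check whether the output mentions enough of the terms the prompt asked about. The threshold below is an arbitrary assumption for illustration; it will not catch every off-topic answer.

```python
def is_relevant(output: str, required_terms: list, min_hits: int = 2) -> bool:
    """Rough relevance check: does the output mention enough of the
    terms the prompt asked about? min_hits is an arbitrary threshold."""
    text = output.lower()
    return sum(term in text for term in required_terms) >= min_hits

terms = ["marketing", "seo", "social media", "email"]
relevant = ("1. Content marketing through blogs and SEO; "
            "2. Social media campaigns; 3. Email marketing.")
off_topic = "The history of marketing began in the 20th century."
print(is_relevant(relevant, terms))   # True
print(is_relevant(off_topic, terms))  # False
```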
3. Efficiency
Prompt: "Summarize this 1,000-word article in 3 bullet points."
✅ Efficient Output: Delivered concise summary in under 10 seconds.
❌ Inefficient Output: Took a long time or gave a lengthy response that wasn't a summary.
Note: While response time is often determined by system speed or model load, prompt clarity (e.g., being concise and direct) can help speed up response time.
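Latency is easy to measure yourself by timing the call. The sketch below wraps a stand-in function rather than a real API call, so the timing harness stays self-contained; swap in your actual client call to measure real response times.

```python
import time

def timed_call(fn, *args):
    """Measure wall-clock latency of a model call and return both
    the result and the elapsed seconds."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def fake_model(prompt: str) -> str:
    # Stand-in for a real API call, used only to exercise the harness.
    return "• point 1\n• point 2\n• point 3"

summary, seconds = timed_call(fake_model,
                              "Summarize this article in 3 bullet points.")
print(f"Responded in {seconds:.3f}s")
```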
4. Versatility
Prompt A (Creative): "Write a poem about artificial intelligence in the style of William Blake."
✅ Handled with creativity and tone sensitivity.
Prompt B (Technical): "Explain how a hash table works in Python."
✅ Accurate, clear explanation with code example.
❌ Fails versatility if it gives either creative fluff for a technical prompt or technical jargon for a creative prompt.
5. Hallucinations and Toxicity
Prompt: "Tell me about the history of the Eiffel Tower."
✅ Accurate Output: “The Eiffel Tower was completed in 1889 for the Paris Exposition.”
❌ Hallucinated Output: “The Eiffel Tower was built by Napoleon in 1750.” (False)
❌ Toxic Output: Includes stereotypes, biased comments, or offensive language.
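A simple automated screen for these failure modes is to flag outputs that contain known-false claims or blocklisted terms. The keyword lists below are purely illustrative; production systems use trained classifiers or moderation APIs rather than substring matching.

```python
def flag_output(output: str, false_claims: list, blocked_terms: list) -> dict:
    """Screen an output against known-false claims and a toxicity
    blocklist. Substring matching is a crude illustrative check."""
    text = output.lower()
    return {
        "hallucination": any(claim in text for claim in false_claims),
        "toxicity": any(term in text for term in blocked_terms),
    }

flags = flag_output(
    "The Eiffel Tower was built by Napoleon in 1750.",
    false_claims=["built by napoleon", "1750"],
    blocked_terms=["stupid", "idiot"],
)
print(flags)  # {'hallucination': True, 'toxicity': False}
```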
Written by Vikash Pathak