Tips for Mastering Prompt Engineering in Large Language Models

Nikhil Ikhar
11 min read


I will use GPT-3.5 to illustrate various prompting techniques. We are only scratching the surface, but these techniques are worth knowing because they give you a sense of direction.

The code below is the boilerplate used to get results from the LLM throughout this post (it targets the pre-1.0 `openai` Python SDK). There are multiple ways to send prompts; in `get_completion` we do not store any conversation history, so each call produces a fresh, independent response.

import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key = os.getenv('OPENAI_API_KEY')

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Single-turn call: no history is kept between invocations
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 = least random, most deterministic output
    )
    return response.choices[0].message["content"]

  1. Large Language Models (LLMs) respond to the instructions we give them, so it is crucial to state exactly what we want in specific, detailed prompts. There are many ways to phrase a request, and experimenting with different phrasings yields different results. One of the first things to decide is the desired output format: by specifying it explicitly, whether a list, a paragraph, or JSON, we guide the model to present information in the shape that best suits our needs.

     prompt = f"""
     Generate a list of three 1980s Bollywood movies along \ 
     with their movie name, title, author, and length. 
     Provide them in JSON format with the following keys: 
     movie, title, author, length.
     """
     response = get_completion(prompt)
     print(response)
    
     [
         {
             "movie": "Mr. India",
             "title": "Mr. India",
             "author": "Salim-Javed",
             "length": "179 minutes"
         },
         {
             "movie": "Chandni",
             "title": "Chandni",
             "author": "Yash Chopra",
             "length": "186 minutes"
         },
         {
             "movie": "Qayamat Se Qayamat Tak",
             "title": "Qayamat Se Qayamat Tak",
             "author": "Nasir Hussain",
             "length": "162 minutes"
         }
     ]
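
     Because we asked for JSON, the reply can be consumed directly by code. A minimal sketch, using a shortened stand-in for the response text above:

     ```python
     import json

     # Stand-in for the JSON text returned by get_completion above
     response = '''[
         {"movie": "Mr. India", "title": "Mr. India",
          "author": "Salim-Javed", "length": "179 minutes"}
     ]'''

     movies = json.loads(response)
     for m in movies:
         print(f"{m['title']} ({m['length']}), written by {m['author']}")
     ```

     If the model ever wraps the JSON in extra prose, `json.loads` will raise a `JSONDecodeError`, which is a useful signal to tighten the prompt.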
    
  2. We can ask the Large Language Model (LLM) to provide detailed, step-by-step instructions for a variety of tasks. Breaking a complex process into manageable parts makes it easier to understand and follow, and having each stage clearly outlined helps minimize errors and improve efficiency.

     text_1 = f"""
     Making a cup of tea is easy! First, you need to get some \ 
     water boiling. While that's happening, \ 
     grab a cup and put a tea bag in it. Once the water is \ 
     hot enough, just pour it over the tea bag. \ 
     Let it sit for a bit so the tea can steep. After a \ 
     few minutes, take out the tea bag. If you \ 
     like, you can add some sugar or milk to taste. \ 
     And that's it! You've got yourself a delicious \ 
     cup of tea to enjoy.
     """
     prompt = f"""
     You will be provided with text delimited by triple quotes. 
     Convert it into step-by-step instructions.
    
     If the text does not contain a sequence of instructions, \ 
     then simply write \"No steps provided.\"
    
     \"\"\"{text_1}\"\"\"
     """
     response = get_completion(prompt)
     print("Completion for Text 1:")
     print(response)
    
     Completion for Text 1:
     1. Get some water boiling.
     2. Grab a cup and put a tea bag in it.
     3. Once the water is hot enough, pour it over the tea bag.
     4. Let it sit for a bit so the tea can steep.
     5. After a few minutes, take out the tea bag.
     6. Add sugar or milk to taste.
     7. Enjoy your delicious cup of tea.
    

    We will ask the LLM to provide the output in a different format.

     prompt = f"""
     You will be provided with text delimited by triple quotes. 
     Convert it into step-by-step instructions. Follow this format.
    
     step 1: ..
     step 2: ..
     ..
     step n: ..
     If the text does not contain a sequence of instructions, \ 
     then simply write \"No steps provided.\"
    
     \"\"\"{text_1}\"\"\"
     """
     response = get_completion(prompt)
     print("Completion for Text 1:")
     print(response)
    
     Completion for Text 1:
    
     step 1: Get some water boiling.
     step 2: Grab a cup and put a tea bag in it.
     step 3: Pour the hot water over the tea bag.
     step 4: Let the tea steep for a few minutes.
     step 5: Remove the tea bag.
     step 6: Add sugar or milk to taste.
     step 7: Enjoy your cup of tea.
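
     The fixed "step n:" format pays off downstream: the reply becomes trivial to parse. A sketch on a shortened version of the output above:

     ```python
     # Stand-in for the formatted completion returned above
     completion = """step 1: Get some water boiling.
     step 2: Grab a cup and put a tea bag in it.
     step 3: Pour the hot water over the tea bag."""

     # Split each "step n: instruction" line into a label and its instruction
     steps = {}
     for line in completion.splitlines():
         label, _, instruction = line.partition(":")
         steps[label.strip()] = instruction.strip()

     print(steps["step 2"])  # Grab a cup and put a tea bag in it.
     ```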
    
  3. In the prompts above, we enclosed parts of the input in delimiters. Common choices include triple backticks (```), triple quotes ("""), angle brackets (< >), and XML-style tags (<tag> </tag>). These markers clearly separate the text to be processed from the instructions themselves, so the model can tell exactly which part of the prompt is data.

     prompt = f"""
     Summarize the text delimited by triple backticks \ 
     into a single sentence.
     ```{text_1}```
     """
     response = get_completion(prompt)
     print(response)
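
     One way to make delimiters a habit is to build the prompt with a small helper, so the instruction and the untrusted text can never blur together. A sketch (`wrap_prompt` is a hypothetical helper for illustration, not part of any library):

     ```python
     def wrap_prompt(instruction, text, delim="```"):
         # The delimiter marks exactly where the data begins and ends
         return f"{instruction}\n{delim}{text}{delim}"

     prompt = wrap_prompt(
         "Summarize the text delimited by triple backticks into a single sentence.",
         "Making a cup of tea is easy! First, get some water boiling...",
     )
     print(prompt)
     ```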
    
  4. We can guide the Large Language Model (LLM) by providing an example, which is known as a one-shot prompt. In a few-shot prompt we provide several examples, which helps the model pick up the pattern or context more reliably. Either way, the examples give the model a clearer picture of the expected output.

     prompt = f"""
     Your task is to answer in a consistent style.
    
     <child>: Teach me about patience.
    
     <grandparent>: The river that carves the deepest \ 
     valley flows from a modest spring; the \ 
     grandest symphony originates from a single note; \ 
     the most intricate tapestry begins with a solitary thread.
    
     <child>: Teach me about resilience.
     """
     response = get_completion(prompt)
     print(response)
    
     <grandparent>: The tallest trees weather the strongest storms; the brightest stars shine in the darkest nights; the strongest hearts are forged in the hottest fires.
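
     Few-shot prompts can also be assembled programmatically from a list of example pairs (`few_shot_prompt` below is a hypothetical helper for illustration):

     ```python
     def few_shot_prompt(examples, question):
         # Each example pairs a child question with a grandparent-style answer
         parts = ["Your task is to answer in a consistent style.\n"]
         for q, a in examples:
             parts.append(f"<child>: {q}\n\n<grandparent>: {a}\n")
         parts.append(f"<child>: {question}")
         return "\n".join(parts)

     examples = [("Teach me about patience.",
                  "The river that carves the deepest valley flows from a modest spring.")]
     prompt = few_shot_prompt(examples, "Teach me about resilience.")
     print(prompt)
     ```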
    
  5. We can instruct the model to work out its own solution before jumping to a conclusion. Forcing the model to reason through the problem first makes the final verdict more accurate and reliable: in the example below, the student's answer looks plausible, but the model catches the error once it does the calculation itself.

     prompt = f"""
     Your task is to determine if the student's solution \
     is correct or not.
     To solve the problem do the following:
     - First, work out your own solution to the problem including the final total. 
     - Then compare your solution to the student's solution \ 
     and evaluate if the student's solution is correct or not. 
     Don't decide if the student's solution is correct until 
     you have done the problem yourself.
    
     Use the following format:
     Question:
     ```
     question here
     ```
     Student's solution:
     ```
     student's solution here
     ```
     Actual solution:
     ```
     steps to work out the solution and your solution here
     ```
     Is the student's solution the same as actual solution \
     just calculated:
     ```
     yes or no
     ```
     Student grade:
     ```
     correct or incorrect
     ```
    
     Question:
     ```
     I'm building a solar power installation and I need help \
     working out the financials. 
     - Land costs $100 / square foot
     - I can buy solar panels for $250 / square foot
     - I negotiated a contract for maintenance that will cost \
     me a flat $100k per year, and an additional $10 / square \
     foot
     What is the total cost for the first year of operations \
     as a function of the number of square feet.
     ``` 
     Student's solution:
     ```
     Let x be the size of the installation in square feet.
     Costs:
     1. Land cost: 100x
     2. Solar panel cost: 250x
     3. Maintenance cost: 100,000 + 100x
     Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000
     ```
     Actual solution:
     """
     response = get_completion(prompt)
     print(response)
    
     Let x be the size of the installation in square feet.
    
     Costs:
     1. Land cost: $100 * x
     2. Solar panel cost: $250 * x
     3. Maintenance cost: $100,000 + $10 * x
    
     Total cost for the first year of operations:
     Total cost = Land cost + Solar panel cost + Maintenance cost
     Total cost = $100x + $250x + $100,000 + $10x
     Total cost = $360x + $100,000
    
     The total cost for the first year of operations as a function of the number of square feet is 360x + 100,000.
    
     Is the student's solution the same as the actual solution just calculated:
     ```
     No
     ```
     Student grade:
     ```
     Incorrect
     ```
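
     The model's worked solution is easy to sanity-check in plain Python; the student's 450x + 100,000 comes from charging maintenance at $100 per square foot instead of $10:

     ```python
     def first_year_cost(square_feet):
         land = 100 * square_feet                   # $100 / sq ft
         panels = 250 * square_feet                 # $250 / sq ft
         maintenance = 100_000 + 10 * square_feet   # flat fee + $10 / sq ft
         return land + panels + maintenance

     print(first_year_cost(1000))  # 460000, i.e. 360x + 100,000 at x = 1000
     ```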
    
  6. We can guide a Large Language Model (LLM) to generate responses with a limited word or character count. This involves setting constraints to ensure concise and focused outputs, useful for fitting within character limits or providing brief answers. This is especially helpful when brevity is essential, allowing the LLM to deliver precise information.

     prompt = f"""
     ...
     Use at most 50 words.
     """
    
  7. We can ask the model to identify the sentiment or emotion of a piece of text.

     prompt = f"""
     What is the sentiment of the following text, which is delimited with triple backticks?
     Review text: '''{review}'''
     """
     response = get_completion(prompt)
     print(response)
    
     prompt = f"""
     What is the emotion of the following text, which is delimited with triple backticks?
     Review text: '''{review}'''
     """
     response = get_completion(prompt)
     print(response)
    
  8. We can include multiple tasks or requests in a single prompt when interacting with a Large Language Model (LLM). By doing so, we can ask the model to perform various actions or provide different types of information in one go. For example, you might want the model to summarize a text, identify its sentiment, and suggest improvements all within the same prompt. This approach can be efficient and time-saving, as it allows us to gather comprehensive insights or perform multiple analyses without needing separate prompts for each task.

     lamp_review = """Needed a nice lamp for my bedroom. This one from \
     Lumina arrived quickly and works great!"""  # example review text
     prompt = f"""
     Identify the following items from the review text: 
     - Sentiment (positive or negative)
     - Is the reviewer expressing anger? (true or false)
     - Item purchased by reviewer
     - Company that made the item
    
     The review is delimited with '''. 
     Format your response as a JSON object with the keys "Sentiment", "Anger", "Item" and "Brand".
     If the information isn't present, use "unknown" as the value.
     Make your response as short as possible.
    
     Review text: '''{lamp_review}'''
     """
     response = get_completion(prompt)
     print(response)
    
     {
         "Sentiment": "positive",
         "Anger": "false",
         "Item": "lamp",
         "Brand": "Lumina"
     }
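
     Because the multi-task reply is JSON, downstream logic can branch on it directly. A sketch, using the reply above as a stand-in:

     ```python
     import json

     reply = '{"Sentiment": "positive", "Anger": "false", "Item": "lamp", "Brand": "Lumina"}'
     result = json.loads(reply)

     # The model returned "false" as a string, so normalize it to a boolean
     is_angry = str(result["Anger"]).lower() == "true"
     if result["Sentiment"] == "negative" and is_angry:
         print("Escalate to a human agent")
     else:
         print("Send an automated thank-you reply")
     ```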
    
  9. We can use the LLM to detect a language, translate text, or change its tone.

     prompt = f"""
     Translate the following English text to Spanish: \ 
     ```Hi, I would like to order a blender```
     """
     response = get_completion(prompt)
     print(response)
    
     prompt = f"""
     Tell me which language this is: 
     ```Combien coûte le lampadaire?```
     """
     response = get_completion(prompt)
     print(response)
    
     prompt = f"""
     Translate the following from slang to a business letter: 
     'Dude, This is Joe, check out this spec on this standing lamp.'
     """
     response = get_completion(prompt)
     print(response)
    
  10. Translation is not limited to natural languages. We can also ask the model to convert between data formats, for example turning a Python dictionary or JSON payload into an HTML table or an XML document.

    data_json = { "resturant employees" :[ 
        {"name":"Shyam", "email":"shyamjaiswal@gmail.com"},
        {"name":"Bob", "email":"bob32@gmail.com"},
        {"name":"Jai", "email":"jai87@gmail.com"}
    ]}
    
    prompt = f"""
    Translate the following Python dictionary, delimited by ```, from JSON \
    to an HTML table with column headers and a title.
    
    Also translate it from JSON to XML.
    ```{data_json}```
    """
    response = get_completion(prompt)
    print(response)
    
    For the HTML table:
    
    ```html
    <!DOCTYPE html>
    <html>
    <head>
        <title>Restaurant Employees</title>
    </head>
    <body>
    
        <h1>Restaurant Employees</h1>
    
        <table border="1">
            <tr>
                <th>Name</th>
                <th>Email</th>
            </tr>
            <tr>
                <td>Shyam</td>
                <td>shyamjaiswal@gmail.com</td>
            </tr>
            <tr>
                <td>Bob</td>
                <td>bob32@gmail.com</td>
            </tr>
            <tr>
                <td>Jai</td>
                <td>jai87@gmail.com</td>
            </tr>
        </table>
    
    </body>
    </html>
    ```
    
    For the XML:
    
    ```xml
    <restaurant_employees>
        <employee>
            <name>Shyam</name>
            <email>shyamjaiswal@gmail.com</email>
        </employee>
        <employee>
            <name>Bob</name>
            <email>bob32@gmail.com</email>
        </employee>
        <employee>
            <name>Jai</name>
            <email>jai87@gmail.com</email>
        </employee>
    </restaurant_employees>
    ```
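
    The generated XML can be validated and queried with the standard library. A sketch using a shortened version of the output above:

    ```python
    import xml.etree.ElementTree as ET

    xml_output = """<restaurant_employees>
        <employee><name>Shyam</name><email>shyamjaiswal@gmail.com</email></employee>
        <employee><name>Bob</name><email>bob32@gmail.com</email></employee>
    </restaurant_employees>"""

    root = ET.fromstring(xml_output)  # raises ParseError if the model's XML is malformed
    names = [e.findtext("name") for e in root.findall("employee")]
    print(names)  # ['Shyam', 'Bob']
    ```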
    
  11. Large Language Models (LLMs) have the capability to analyze and understand text, making them highly effective tools for generating content tailored for customer support. These models can process input data, such as customer inquiries or feedback, and produce coherent and contextually appropriate responses. By leveraging natural language processing, LLMs can interpret the nuances of customer queries, ensuring that the generated responses are not only accurate but also empathetic and engaging.

    # assumes `review` and `sentiment` were produced by the earlier sentiment prompts
    prompt = f"""
    You are a customer service AI assistant.
    Your task is to send an email reply to a valued customer.
    Given the customer email delimited by ```, \
    Generate a reply to thank the customer for their review.
    If the sentiment is positive or neutral, thank them for \
    their review.
    If the sentiment is negative, apologize and suggest that \
    they can reach out to customer service. 
    Make sure to use specific details from the review.
    Write in a concise and professional tone.
    Sign the email as `AI customer agent`.
    Customer review: ```{review}```
    Review sentiment: {sentiment}
    """
    response = get_completion(prompt)
    print(response)

This is not an exhaustive list. LLMs are effective when given a clear prompt, but there are situations where the prompt can become too large to cover all use cases, or prompt engineering alone may not provide satisfactory results. In such cases, fine-tuning is necessary. As you continue your journey with LLMs, remember that prompt engineering is an iterative process. Experiment, refine your prompts, and don't be afraid to get creative. The possibilities are truly limitless. By mastering this art, you'll unlock the full potential of these powerful AI tools and open doors to innovative applications across various domains. So, start crafting those prompts and witness the magic of language models unfold!
