Six Strategies for Better Results - Prompt Engineering

Here are six prompt-engineering strategies that can put you a step ahead of the crowd.


🎯 Write clear instructions

These models can't read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you'd like to see. The less the model has to guess at what you want, the more likely you'll get it.

Tactics:

- Include details in your query to get more relevant answers
- Ask the model to adopt a persona
- Use delimiters to clearly indicate distinct parts of the input
- Specify the steps required to complete a task
- Provide examples
- Specify the desired length of the output
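
To make this concrete, here is a minimal sketch using the OpenAI Python SDK (v1+). The model name gpt-4o, the meeting-notes task, and the word limit are illustrative assumptions, not part of the guide; the point is how much guesswork the detailed prompt removes compared with the vague one.

```python
# Minimal sketch: a vague prompt vs. a clear, detailed one.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Summarize the meeting notes."

clear_prompt = (
    "Summarize the meeting notes below in one paragraph, then list every "
    "action item as a bullet with an owner. "
    "Keep the whole answer under 120 words.\n\n"
    "Notes:\n<paste your notes here>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": clear_prompt}],
)
print(response.choices[0].message.content)
```

Everything the vague prompt leaves the model to guess (format, length, what counts as important) the clear prompt spells out.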

🎯 Provide reference text

Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to these models can help them answer with fewer fabrications.

Tactics:

- Instruct the model to answer using a reference text
- Instruct the model to answer with citations from a reference text
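
A minimal sketch of this strategy, under the same SDK and model-name assumptions as above: the reference text is wrapped in delimiters, and the model is told to answer only from it, with an explicit fallback when the answer isn't there.

```python
# Minimal sketch: ground the model in a reference text to reduce fabrications.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

reference = """<paste the trusted source text here>"""

prompt = (
    "Use only the article delimited by triple quotes to answer the question. "
    "If the answer cannot be found in the article, reply exactly: "
    "'I could not find an answer.'\n\n"
    f'"""{reference}"""\n\n'
    "Question: When was the policy last updated?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```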

🎯 Split complex tasks into simpler subtasks

Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.

Tactics:

- Use intent classification to identify the most relevant instructions for a user query
- For dialogue applications that require very long conversations, summarize or filter previous dialogue
- Summarize long documents piecewise and construct a full summary recursively
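
As a sketch of that workflow idea (same assumed SDK and model name), the snippet below summarizes each chapter of a long document separately, then hands those intermediate outputs to a final, much simpler combining step. The chapter placeholders are illustrative.

```python
# Minimal sketch: decompose one complex task into a pipeline of simple ones.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One round trip to the model; gpt-4o is an assumed model name."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: a simple subtask, run once per chapter.
chapters = ["<chapter 1 text>", "<chapter 2 text>", "<chapter 3 text>"]
summaries = [ask(f"Summarize this chapter in three sentences:\n\n{c}") for c in chapters]

# Step 2: the outputs of step 1 become the input of an easier final task.
overview = ask(
    "Combine these chapter summaries into one coherent overview:\n\n"
    + "\n\n".join(summaries)
)
print(overview)
```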

🎯 Give the model time to "think"

You might not know it instantly if asked to multiply 17 by 28, but you can still work it out with time. Similarly, models make more reasoning errors when answering right away, rather than taking time to find an answer. Asking for a "chain of thought" before an answer can help the model reason its way toward correct answers more reliably.

Tactics:

- Instruct the model to work out its own solution before rushing to a conclusion
- Use inner monologue or a sequence of queries to hide the model's reasoning process
- Ask the model if it missed anything on previous passes
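
A minimal chain-of-thought sketch under the same assumptions: the system message tells the model to reason step by step first and only then commit to an answer.

```python
# Minimal sketch: ask for reasoning before the final answer.
from openai import OpenAI

client = OpenAI()

system = (
    "First work through the problem step by step and show your reasoning. "
    "Only after that, give the final result on a line starting with 'Answer:'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "What is 17 multiplied by 28?"},
    ],
)
print(response.choices[0].message.content)
```

If users should only see the conclusion, parse out the line after "Answer:"; the visible reasoning is for the model's benefit, not necessarily the reader's.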

🎯 Use external tools

Compensate for the weaknesses of the model by feeding it the outputs of other tools. For example, a text retrieval system (sometimes called RAG or retrieval augmented generation) can tell the model about relevant documents. A code execution engine like OpenAI's Code Interpreter can help the model do the math and run code. If a task can be done more reliably or efficiently by a tool rather than by a language model, offload it to get the best of both.

Tactics:

- Use embeddings-based search to implement efficient knowledge retrieval
- Use code execution to perform more accurate calculations or call external APIs
- Give the model access to specific functions
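
One way to wire this up is function calling in the Chat Completions API. In the sketch below, the model name and the multiply tool are illustrative assumptions: the model requests a calculation, and Python performs it exactly.

```python
# Minimal sketch: offload exact arithmetic to a local function via tool calling.
import json
from openai import OpenAI

client = OpenAI()

def multiply(a: float, b: float) -> float:
    """The local tool the model can ask us to run."""
    return a * b

tools = [{
    "type": "function",
    "function": {
        "name": "multiply",
        "description": "Multiply two numbers exactly.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "What is 1729.04 multiplied by 86.23?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the tool
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(multiply(**args))  # Python does the math, not the model
```

In a full application you would append the tool result to the conversation and let the model phrase the final reply; this sketch stops at the exact computation.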

🎯 Test changes systematically

Improving performance is easier if you can measure it. In some cases, a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore, to be sure that a change is net positive to performance, it may be necessary to define a comprehensive test suite (also known as an "eval").

Tactic:

- Evaluate model outputs with reference to gold-standard answers
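
Here is a deliberately tiny "eval" sketch under the same SDK assumptions: a couple of questions with gold answers and a scoring loop, so two prompt variants are compared on the same cases instead of eyeballing single examples. A real suite would be far larger and more representative.

```python
# Minimal sketch: score prompt variants against gold-standard answers.
from openai import OpenAI

client = OpenAI()

# Illustrative cases only; build a representative set for real use.
cases = [
    {"question": "What is the capital of France?", "gold": "paris"},
    {"question": "What is 12 * 12?", "gold": "144"},
]

def score(template: str) -> float:
    """Fraction of cases whose gold answer appears in the model's reply."""
    hits = 0
    for case in cases:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content": template.format(q=case["question"])}],
        )
        hits += case["gold"] in resp.choices[0].message.content.lower()
    return hits / len(cases)

print("variant A:", score("Answer briefly: {q}"))
print("variant B:", score("Answer with a single word or number: {q}"))
```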


💡 Follow me on my IG page for more brewed AI content.


🚀 From the Author: start by learning the fundamentals of ChatGPT.
☝ Note: stay in touch with the technology.
