What is a System Prompt? Types of System Prompting...


Now we will start with the fundamentals of GenAI, and our first topic is System Prompts. All of us have given prompts or commands to AI bots like ChatGPT, Gemini, Claude, etc.
The prompts we give come under the role of the “User”. There are various roles in prompting:
USER
ASSISTANT
DEVELOPER
SYSTEM
…
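In a chat-style API, these roles simply appear as tags on the messages in a conversation. A minimal sketch (the conversation content here is invented for illustration):

```javascript
// Each message in a chat-style API carries one of the roles above.
// The conversation below is made up for illustration.
const conversation = [
  { role: "system", text: "You are a helpful finance assistant." },
  { role: "user", text: "What is a stock?" },
  { role: "assistant", text: "A stock is a share of ownership in a company." },
];

// Collect the distinct roles used so far, in order of first appearance.
const rolesUsed = [...new Set(conversation.map((m) => m.role))];
console.log(rolesUsed.join(", ")); // system, user, assistant
```

Note that the system message sits first: it sets the rules before the user and assistant turns begin.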
We will discuss these roles in more detail in upcoming blogs. But let's start with the System Prompt. A system prompt is used to give context and instructions to the LLM before the conversation starts.
Yes, we can shape the model's behavior before the user begins to interact. (Note that this is not actual training; the instructions only steer the model for that conversation.) There are different types of System Prompting:
Zero Shot Prompting → As the name suggests, we run the model without any contextual examples. That doesn't mean the model will not work; it will, but its accuracy on our specific task will usually drop. Zero-shot can be fine in some places, but it is advised to specify the context for an accurate answer, because without it, our app is no different from just using the same model on the provider's own website.
{
  role: "system",
  text: "You are an AI Assistant named 'FinGenius AI' which is powered by MyFinance, in the field of finance and investment. You are friendly and helpful. Your job is to answer questions related to finance and investment. If the user asks a question that is not related to finance and investment, you should respond with 'I'm sorry, I don't understand'."
},
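To show how such a system prompt travels with a user message, here is a minimal sketch. The helper name and the shortened prompt string are my own, and note that real SDKs such as OpenAI's use a `content` key rather than `text` for the message body:

```javascript
// Hypothetical, shortened version of the zero-shot system prompt above.
const ZERO_SHOT_SYSTEM =
  "You are FinGenius AI, powered by MyFinance. Only answer finance and investment questions.";

// Zero-shot: only the system instruction and the user query, no examples.
function buildZeroShotMessages(userQuery) {
  return [
    { role: "system", text: ZERO_SHOT_SYSTEM },
    { role: "user", text: userQuery },
  ];
}

const zeroShotMessages = buildZeroShotMessages("How do I start investing?");
console.log(zeroShotMessages.map((m) => m.role).join(" -> ")); // system -> user
```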
Few-Shot Prompting → Here the context is specified using some examples. The model learns the pattern from the examples inside the prompt (this is in-context learning, not actual training), so it adapts on the basis of the context given. In production, the examples can even run into the range of 100-150, because more good examples generally mean better accuracy (though a longer prompt also costs more tokens).
{
  role: "system",
  text: `You are an AI Assistant named 'FinGenius AI' which is powered by MyFinance, in the field of finance and investment. You are friendly and helpful. Your job is to answer questions related to finance and investment. If the user asks a question that is not related to finance and investment, you should respond with 'I'm sorry, I don't understand'.

  Examples:
  Q: What is your name?
  A: My name is FinGenius AI, which is backed by MyFinance, a trading platform for stocks and crypto.
  Q: How can I get started with investing?
  A: You can start by learning about the basics of investing.
  Q: I don't know where to start?
  A: It's simple, you can start by signing up to the MyFinance platform and searching for stocks and crypto.`
},
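Instead of packing every Q/A example into one long system string, a common alternative is to send each example as its own user/assistant turn, which many chat APIs handle reliably. A sketch under that assumption (the example pairs and helper names are my own):

```javascript
// Hypothetical few-shot examples for FinGenius AI.
const EXAMPLES = [
  { q: "What is your name?", a: "My name is FinGenius AI, backed by MyFinance." },
  { q: "How can I get started with investing?", a: "Start by learning the basics of investing." },
];

// Expand each Q/A pair into a user turn followed by an assistant turn,
// sandwiched between the system prompt and the real user query.
function buildFewShotMessages(systemPrompt, examples, userQuery) {
  const exampleTurns = examples.flatMap(({ q, a }) => [
    { role: "user", text: q },
    { role: "assistant", text: a },
  ]);
  return [
    { role: "system", text: systemPrompt },
    ...exampleTurns,
    { role: "user", text: userQuery },
  ];
}

const msgs = buildFewShotMessages("You are FinGenius AI.", EXAMPLES, "Is crypto risky?");
console.log(msgs.length); // 1 system + 2 examples * 2 turns + 1 final user = 6
```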
Chain of Thought → This prompting is used for reasoning tasks, where the prompt gives the model a set of rules and steps to follow while producing the output. Each step contributes some information that moves the model toward the final answer.
const SYSTEM_PROMPT = `
You are an AI assistant who works on the START, THINK and OUTPUT format.
For a given user query, first think and break down the problem into sub-problems.
You should always keep thinking and thinking before giving the actual output.
Also, before outputting the final answer, you must check once if everything is correct.

Rules:
1. Strictly follow the output JSON format.
2. Always follow the output in sequence, that is START, THINK, and OUTPUT.
3. Always perform only one step at a time and wait for the next step.
4. Always make sure to do multiple steps of thinking before giving the output.

Output JSON Format:
{ "step": "START | THINK | OUTPUT", "content": "string" }

Example:
User: Can you solve 3 + 4 * 10 - 4 * 3
ASSISTANT: { "step": "START", "content": "The user asked me to solve the math problem 3 + 4 * 10 - 4 * 3." }
ASSISTANT: { "step": "THINK", "content": "Let's break down the problem into sub-problems." }
ASSISTANT: { "step": "THINK", "content": "As per BODMAS, first let's solve all multiplications and divisions." }
ASSISTANT: { "step": "THINK", "content": "So, first we need to solve 4 * 10, which gives 40, and 4 * 3, which gives 12." }
ASSISTANT: { "step": "THINK", "content": "Great, now the equation looks like 3 + 40 - 12." }
ASSISTANT: { "step": "THINK", "content": "First, perform the addition: 3 + 40, which equals 43." }
ASSISTANT: { "step": "THINK", "content": "Great, now the equation looks like 43 - 12. Apply the subtraction: 43 - 12, which equals 31." }
ASSISTANT: { "step": "THINK", "content": "Great, now all the steps are done and the final answer is 31." }
ASSISTANT: { "step": "OUTPUT", "content": "The answer to the math problem 3 + 4 * 10 - 4 * 3 is 31." }
`;

const messages = [
  { role: "system", text: SYSTEM_PROMPT },
  { role: "user", text: "Can you solve 3+5*10-10*3" },
];
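Rule 3 above ("one step at a time") implies a driver loop on our side: keep calling the model, parse each JSON step, and stop once it emits OUTPUT. A sketch with a stubbed model stand-in (in reality the call would go to an LLM API; `fakeModel` and its canned steps are invented for illustration):

```javascript
// Stand-in for an LLM call: returns the next canned reasoning step each time.
const cannedSteps = [
  { step: "START", content: "User asked to solve 3 + 4 * 10 - 4 * 3." },
  { step: "THINK", content: "Multiplications first: 4 * 10 = 40, 4 * 3 = 12." },
  { step: "THINK", content: "Now 3 + 40 - 12 = 31." },
  { step: "OUTPUT", content: "The answer is 31." },
];
let callCount = 0;
function fakeModel() {
  return JSON.stringify(cannedSteps[callCount++]);
}

// Driver loop: request one step at a time until the model emits OUTPUT
// (with a step cap so a misbehaving model cannot loop forever).
function runChainOfThought(model, maxSteps = 10) {
  const trace = [];
  for (let i = 0; i < maxSteps; i++) {
    const parsed = JSON.parse(model());
    trace.push(parsed);
    if (parsed.step === "OUTPUT") break;
  }
  return trace;
}

const trace = runChainOfThought(fakeModel);
console.log(trace[trace.length - 1].content); // The answer is 31.
```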
There are other types of System Prompting too, like Self-Consistency Prompting, where the same question is answered multiple times and the outputs are compared to pick the most reliable answer (a related setup, where another model evaluates the outputs, is known as LLM-as-a-judge), and Persona-Based Prompting, where the prompt makes the model personify a specific entity and answer on the basis of that persona.
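The self-consistency idea can be sketched without any model at all: sample several answers to the same question and keep the one that appears most often (the sampled answers below are made up):

```javascript
// Majority vote over several sampled answers (the core of self-consistency).
function majorityVote(answers) {
  const counts = new Map();
  for (const a of answers) counts.set(a, (counts.get(a) || 0) + 1);
  let best = null;
  let bestCount = -1;
  for (const [answer, count] of counts) {
    if (count > bestCount) {
      best = answer;
      bestCount = count;
    }
  }
  return best;
}

// Three sampled reasoning paths agree on "31", one slipped to "43".
console.log(majorityVote(["31", "31", "43", "31"])); // 31
```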
In short, the System Prompt plays a huge role in how well an agent can perform.
😊 Stay tuned for more talks about AI…
Written by Bhavya Jain
