System prompt: configure your assistant

Alberto Basalo
5 min read

When you talk to someone you don't know, the first thing you say is your name. Your appearance and body language also say a lot about you. Then you immediately ask, question, or report something. It's an automatic three-step process between people, and in some way we can transfer it to our interactions with virtual assistants.

This initial short conversation is very well studied and systematized in the world of business. A well-known example is the elevator pitch, which is a fast and effective presentation of a product or service. It is also used for job interviews and, in general, to break the ice and establish a framework for the interaction.

LLMs are able to simulate this initial presentation, and thanks to prompt engineering, we can teach them to interact the way we want. Some, like ChatGPT or Perplexity, already offer the option to customize the model's behavior in your account. In others, you have to do it at the beginning of each chat, or as a preamble to the prompt.

If this is true for general-purpose models, you can expect products for a specific use, such as software development, to have this ability too. It is an opportunity we should take advantage of to specify how our virtual assistant behaves when responding, generating code, and so on.

The example of Inigo Montoya

As a fan of The Princess Bride, I like to use Inigo Montoya's introduction to explain this concept. This formula has even been studied at university. Between people, it boils down to three short sentences (four if we separate the name from the greeting), each with a specific purpose.

  1. Presentation: Hello, my name is Inigo Montoya.

  2. Context: You killed my father.

  3. Purpose: Prepare to die.

Lifted to the world of LLMs, we could express this as a system prompt for a language model.

# Presentation
Hello, my name is Inigo Montoya.

# Context
You killed my father.

# Purpose
Prepare to die.

OK, OK, now it's time for AI for developers; I can't help it.

The system configuration

First of all, note that beyond this initial configuration, some tools also let you establish specific rules for each project. So we should see the system prompt as something more generic, and at the same time shorter, because here size really counts, and costs tokens.

A disclaimer: strictly speaking, the term system prompt comes from the use of LLM APIs, where it is distinguished from the user prompt, which is more concrete and outside the developer's control. However, since it serves the same purpose here, we will use the term system prompt to refer to the model configuration in general.
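
In API terms, the distinction looks roughly like this. A minimal Python sketch, assuming an OpenAI-style chat payload; the function and model names are illustrative, not any particular SDK's API:

```python
# Sketch: how a system prompt and a user prompt travel together in a
# typical chat-style LLM API request body (OpenAI-style, illustrative).

def build_chat_payload(system_prompt: str, user_prompt: str,
                       model: str = "gpt-4o") -> dict:
    """Assemble the request body: the system prompt carries the persistent
    configuration, while the user prompt carries the concrete question."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_payload(
    system_prompt="You are a concise assistant for senior developers.",
    user_prompt="Explain the difference between a system and a user prompt.",
)
print(payload["messages"][0]["role"])  # the configuration travels as "system"
```

In a chat UI you never see this structure, but the "customize" settings of ChatGPT or Perplexity end up playing the same role as that first message.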

The main functions that you should assign to your system prompt are:

  • Establish the general context for the model's interactions

  • Define the tone, style, and personality desired for the assistant

  • Specify operational and ethical restrictions

  • Promote consistency in responses and interactions

  • Align the model with the specific goals of the application

  • In more advanced cases, provide reference information to be used when composing answers
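
Those functions can be combined into a tiny template builder. A hypothetical helper in Python (all names and section headings are illustrative):

```python
# Hypothetical helper that assembles a system prompt from the functions
# listed above: context, tone, restrictions, and goal.

def compose_system_prompt(context: str, tone: str,
                          restrictions: list[str], goal: str) -> str:
    """Join the pieces into one short, structured system prompt."""
    rules = "\n".join(f"- {r}" for r in restrictions)
    return (
        f"# Context\n{context}\n\n"
        f"# Tone\n{tone}\n\n"
        f"# Restrictions\n{rules}\n\n"
        f"# Goal\n{goal}"
    )

prompt = compose_system_prompt(
    context="You assist a senior software engineer.",
    tone="Concise and direct, no chitchat.",
    restrictions=["Never invent data.", "Ask when information is missing."],
    goal="Produce clean, tested, well-documented code.",
)
```

Keeping the sections explicit makes the prompt easy to review and trim, which matters given that every token of configuration is paid on every request.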

Of course, as with people, you don't have to treat all your virtual assistants the same. Here are a couple of examples, from the most generic to the most specific.

Generalist for ChatGPT or Perplexity

Hello, my name is Alberto Basalo and I'm a developer from the X generation.
I'm interested in technology, nature, football, and rock; but I'm bored by politics and religion.
Help me with your knowledge, without circumlocution or chitchat, and above all without inventing data.

Specific for development

I'm a senior software engineer who works as a freelance consultant and trainer for programmers.
I write clean, tested, and well-documented code, using appropriate design patterns and software architectures.
I use version control systems and automated testing frameworks to ensure the quality and reliability of my code.
I stay up-to-date with the latest programming trends and best practices to deliver the best solutions to my clients.

Even more specific and detailed for Cursor

If the system allows it, and Cursor does, you can be even more specific and detailed. Here is a non-exhaustive list of general rules that you can adapt to your liking.

- Answer me in English, even if I ask you in Spanish.
- Be concise and direct, without preambles or polite farewells.
- Do not explain fundamentals or basic concepts to me.
- Before answering, read my question carefully, prepare a response, evaluate, correct, and then answer.
- If you don't know the answer, don't invent one; ask me so we can find it together.
- Complete all the tasks you are assigned, without leaving anything undone.
- Adjust to the specific rules of each project, or use the standards or best practices that you have available.
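
To make rules like these stick at the project level, Cursor has read them from a `.cursorrules` file at the project root (newer versions also support a `.cursor/rules/` directory; check the docs for your version). A minimal shell sketch:

```shell
# Store a few project-level rules where Cursor can pick them up.
# Using a temp directory here so the sketch is self-contained.
cd "$(mktemp -d)"
cat > .cursorrules <<'EOF'
- Answer in English, even if I ask in Spanish.
- Be concise and direct, without preambles or polite farewells.
- If you don't know the answer, don't invent one; ask me.
EOF
grep -c '^-' .cursorrules
```

Once the file is committed, every chat in that project starts from the same rules, with no need to repeat them.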

Conclusion

As you can see, the system configuration is key to obtaining the desired behavior from our virtual assistants. This is especially important when you want to use an LLM in professional or technical environments, where precision and clarity are essential. And as we have seen, you don't need to be an AI expert to configure them.

In the next entries, we will see how to establish specific rules for languages, frameworks, tools, etc. This will let us exploit AI to its fullest while programming, following our mantra: code smarter, not harder.


Written by

Alberto Basalo

I am a full-stack developer with over 25 years of experience. I can help you with Angular and Nest architecture, testing with Cypress, cleaning your code, and AI Driven Development.