System Prompts


In the AI ecosystem, being good at prompting models already puts you ahead of most of your peers. Let’s build up the context for system prompting in GenAI step by step, from the roots (GIGO) to why prompting styles matter today.
A prompt is the input provided to the AI model to guide it in generating specific content.
The GIGO Principle
Garbage In, Garbage Out: if your input is poor, the output will be poor; if your input is good, the output will be good.
“Now the question arises: who decides whether the input is good or poor, and on what basis?”
The answer is that the LLM judges input quality based on the data it was pre-trained on.
Hence, every model has its own prompting style, and every model’s docs describe how to get good output quality from it.
Why Prompting Matters in GenAI?
Traditional software is deterministic: the same input always produces the same output. GenAI, by contrast, is probabilistic.
The same model can give very different answers depending on how you ask.
Prompting becomes a programming interface for natural language.
Prompt Engineering
As GenAI tools like GPT, Claude, Gemini, and LLaMA became popular, people noticed two things:
Short, unclear prompts often give average results.
Carefully designed prompts lead to structured, useful, and even creative outputs.
This gave birth to prompt engineering, i.e. the art of crafting inputs that guide the AI effectively.
Prompting Styles
- Alpaca Prompting Style
- Instruction Format
- Flan-T5
- ChatML (Chat Markup Language)
The most important style is ChatML, which is effectively the industry standard today. It is the format used by the OpenAI ChatGPT API, and it uses roles like system, user, and assistant to structure the conversation:

```json
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Explain what embeddings are in AI."},
  {"role": "assistant", "content": "Embeddings are numerical representations of text..."}
]
```
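As a minimal sketch, the same ChatML-style message list can be assembled in Python before being sent to a chat API. The helper function below is illustrative, not part of any official SDK:

```python
def build_chat_messages(system_prompt, user_prompt, history=None):
    """Assemble a ChatML-style list of role/content messages."""
    messages = [{"role": "system", "content": system_prompt}]
    for turn in history or []:
        messages.append(turn)  # earlier user/assistant turns, if any
    messages.append({"role": "user", "content": user_prompt})
    return messages

messages = build_chat_messages(
    "You are a helpful assistant.",
    "Explain what embeddings are in AI.",
)
```

The resulting list is exactly what you would pass as the `messages` argument of a chat-completion call.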
Types of System Prompting
Now , let’s look at the common prompting strategies you can use when interacting with models.
1. Zero-shot Prompting
- The model is given a direct question or task without any prior examples, relying solely on the knowledge it gained during pre-training.
Best for: Simple, straightforward tasks.
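For example, a zero-shot prompt simply states the task; the review text below is made up for illustration:

```python
# Zero-shot: state the task with no examples; the model relies only
# on what it learned during pre-training.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two hours.'"
)
print(zero_shot)
```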
2. Few-shot Prompting
You give a few examples to set the context before asking the real question.
Best for: When you want specific formatting or style.
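A few-shot prompt can be built by prepending labeled examples so the model infers the task and the expected output format. The sentiment examples here are invented for illustration:

```python
def few_shot_prompt(examples, query):
    """Prepend labeled examples so the model infers the task format."""
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {query}\nSentiment:")  # model completes this
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    [("I loved this movie!", "positive"),
     ("The service was terrible.", "negative")],
    "The food was amazing.",
)
```

Because the prompt ends mid-pattern (`Sentiment:`), the model is nudged to answer with just a label, matching the examples.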
3. Chain-of-Thought Prompting
You encourage the AI to think step by step instead of jumping straight to the answer.
START -> THINK -> EVALUATE -> OUTPUT
LLM as a Judge: using a large language model (LLM) to evaluate the output of another LLM, essentially using AI to assess AI.
Best for: Complex reasoning, math, logic-based tasks.
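A common way to trigger this behavior is to wrap the question in a step-by-step instruction. The classic "Let's think step by step" cue is a convention from zero-shot-CoT work, and the exact wording below is just one possible template:

```python
def chain_of_thought(question):
    """Wrap a question in a chain-of-thought instruction."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then give the final answer "
        "on its own line prefixed with 'Answer:'."
    )

prompt = chain_of_thought("What is 17 * 24?")
```

Asking for the final answer on a marked line also makes the response easy to parse programmatically.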
4. Self-Consistency Prompting
The model generates multiple responses and selects the most consistent or common answer.
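The selection step can be sketched as a simple majority vote. Here the sampled completions are simulated as a list of final answers rather than real API calls:

```python
from collections import Counter

def most_consistent(answers):
    """Return the answer that appears most often across samples."""
    return Counter(answers).most_common(1)[0][0]

# e.g. five sampled completions from the same prompt
samples = ["42", "42", "41", "42", "40"]
best = most_consistent(samples)
```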
5. Persona-Based Prompting
The model is instructed to respond as if it were a particular character or professional.
Best for: Tailoring tone, style, and perspective.
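In ChatML terms, the persona usually goes in the system message. The template below is an illustrative sketch; the persona and tone strings are assumptions, not a fixed format required by any API:

```python
def persona_system_prompt(role, tone="friendly"):
    """Build a system prompt that assigns the model a persona."""
    return (
        f"You are a {role}. Answer in a {tone} tone, "
        f"using vocabulary a {role} would use."
    )

prompt = persona_system_prompt("senior Python developer", tone="concise")
```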
If you want the accompanying code: https://github.com/i-himanshu29/ChaiCode_GenAI_Cohort/tree/main/02_Prompting
Conclusion
Mastering prompts isn’t just about asking questions — it’s about asking them the right way. In the age of GenAI, good prompting is like a superpower.
Next time you interact with an AI, experiment with these prompting techniques and notice how the output changes. You’ll quickly realize that the prompt is just as important as the model itself!
I’m truly thankful for your time and effort in reading this.
Written by

Himanshu Maurya