Prompting


Prompting is the art and science of creating effective input tokens for LLMs to generate the desired outputs. It's basically how we "talk" to the AI, and the way we phrase our inputs determines the quality of the response.
Think of it this way:
Large Language Models (LLMs) operate on a principle similar to GIGO (Garbage In, Garbage Out).
Garbage In: The prompt is vague, ambiguous, poorly structured, or simply lacks the necessary context.
Garbage Out: The LLM, even with its advanced capabilities, will struggle to provide a precise, relevant, or high-quality response. You'll get "garbage out."
In simple words, the quality of your prompt determines the quality of the LLM's output.
A few common prompting techniques are explained below:
Zero Shot Prompting
Zero Shot Prompting is the most straightforward approach. You give the model a direct question without any preliminary examples.
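Here is a demonstration of zero-shot prompting. The snippet below is a minimal sketch of such a prompt; the sentiment-classification task and wording are illustrative assumptions, not tied to any particular model or library.
# Zero-shot: the task is stated directly, with no solved examples.
zero_shot_prompt = """Classify the sentiment of the following review as Positive, Negative, or Neutral.

Review: "The battery lasts barely two hours and the screen flickers."
Sentiment:"""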
Few Shot Prompting
With Few Shot Prompting, the model is provided with a few examples before it responds.
This helps the model understand the desired format, style or the type of answer the user is looking for.
It's similar to showing someone a few solved examples of a puzzle before asking them to solve a new one.
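Below is a minimal sketch of a few-shot prompt for the same kind of sentiment task; the solved examples are illustrative assumptions.
# Few-shot: a handful of solved examples establish the format before the real query.
few_shot_prompt = """Classify the sentiment of each review as Positive, Negative, or Neutral.

Review: "Absolutely loved the camera quality!"
Sentiment: Positive

Review: "It stopped working after a week."
Sentiment: Negative

Review: "The battery lasts barely two hours and the screen flickers."
Sentiment:"""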
Chain of Thought Prompting (CoT)
The model is encouraged to break down its reasoning step by step before giving the final output. Generating these intermediate reasoning steps helps improve the model's reasoning on complex problems.
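Below is a minimal sketch of a chain-of-thought prompt; the word problem and the "think step by step" instruction are illustrative assumptions.
# Chain of Thought: the prompt explicitly asks for intermediate reasoning before the final answer.
cot_prompt = """A shop sells pens at 12 rupees each. Riya buys 5 pens and pays with a 100-rupee note.
How much change does she get?

Let's think step by step, and give the final answer on the last line."""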
Self Consistency Prompting
Self-consistency prompting involves the model generating multiple responses to the same prompt and then selecting the most consistent or common answer.
This technique gives a pseudo-probability of an answer being correct.
It follows these steps:
Generating diverse reasoning paths: The LLM is given the same prompt multiple times. A high temperature setting (remember this?) encourages the model to generate different reasoning paths and perspectives on the problem.
Extracting the answer from each generated response.
Choosing the most common answer.
After generating these multiple reasoning paths, the self-consistency method then collects the final answers from each path. The answer that appears most frequently across all generated paths is considered the most accurate.
In our example, if the model generated these three paths (and perhaps more internally), it would see:
Path 1: 67
Path 2: 67
Path 3: 35
By a clear majority vote (2 out of 3, or more if additional consistent paths were generated), the model would confidently select 67 as the final answer.
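Below is a minimal sketch of the voting step in Python, assuming the final answers have already been extracted from each sampled reasoning path (the sampling calls themselves are omitted).
from collections import Counter

# Final answers extracted from independently sampled reasoning paths (high temperature).
path_answers = ["67", "67", "35"]

# Self-consistency: pick the answer the majority of paths agree on.
votes = Counter(path_answers)
final_answer, count = votes.most_common(1)[0]
print(final_answer, f"({count} of {len(path_answers)} paths agree)")  # 67 (2 of 3 paths agree)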
Instruction Prompting
The model is explicitly instructed to follow a particular format or guideline.
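A minimal, illustrative sketch of an instruction prompt; the format requirements here are assumptions chosen for demonstration.
# Instruction prompting: the output format and constraints are spelled out explicitly.
instruction_prompt = """Summarise the article below in exactly 3 bullet points.
Each bullet must be under 15 words, and the last bullet must state the main takeaway.

Article: <paste article text here>"""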
Direct Answer Prompting
The model is asked to give a concise and direct response without explanation.
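A minimal, illustrative sketch of a direct answer prompt.
# Direct answer prompting: ask only for the answer, with no reasoning or explanation.
direct_answer_prompt = """What is the boiling point of water at sea level in degrees Celsius?
Answer with the number only, no explanation."""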
Persona Based Prompting
The model is instructed to respond as if it were a particular character or professional.
Below is an example of a persona-based prompt for Mr. Narendra Modi, our beloved Prime Minister.
system_prompt = """
You are Prime Minister Narendra Modi, addressing the citizens of India. Your responses should reflect your public speaking style, common themes, and vision for the nation.
1. TONE:
* Inspirational, confident, and optimistic.
* Direct and conversational, as if speaking to a large rally or via "Mann Ki Baat".
* Emphatic and often uses rhetorical questions to engage the audience.
* Maintain a respectful yet firm stance, especially on national issues.
2. REPEATED PHRASES:
* Frequent use of "My dear brothers and sisters!" (Mere pyaare bhaiyon aur behnon!), "140 crore Indians" (1.4 billion people), "Team India".
* Emphasis on **"Sabka Saath, Sabka Vikas, Sabka Vishwas, Sabka Prayas"** (Together, for everyone's growth, with everyone's trust, and everyone's effort).
* References to "New India" (Naya Bharat), "Amrit Kaal" (golden era), and India's rising global stature.
* Inclusion of phrases related to development, progress, technology, youth power, 'vikas' (development), 'atmanirbhar' (self-reliant).
* Occasionally use a few simple Hindi words or phrases naturally integrated into the English response (e.g., *Mitron*, *Bharat*, *seva*).
* Start every 3rd reply with "Dekhiye, mere pyaare bhaiyon aur behnon..." or "Mitron..."
3. STRICT TOPICS:
* **National Development & Progress:** Infrastructure, economic growth, social schemes.
* **Nationalism & Pride:** India's heritage, cultural strength, global leadership.
* **Youth & Innovation:** Empowering the young, technological advancements.
* **Citizen Participation:** Emphasizing collective effort and responsibility.
* **Good Governance:** Transparency, efficiency, and combating corruption.
* **Security & Stability:** Protecting national interests.
4. EXAMPLES:
* User: Sir, how is India doing on the global stage these days?
* AI: (Beaming with pride) My dear brothers and sisters! Today, Bharat is shining bright on the global stage. Our voice is heard, our leadership is recognized. This is not just *Modi's* work, this is the strength of 140 crore Indians, our collective resolve!
* User: PM sir, what is your message for the youth of the country?
* AI: Dekhiye, mere pyaare bhaiyon aur behnon... Our youth, our Yuva Shakti, is the backbone of New India! Their energy, their innovation, their dreams... this is what will make Bharat *Vishwa Guru*. Keep dreaming big, keep innovating, and keep contributing to an Atmanirbhar Bharat!
* User: Sir, how can common citizens contribute to the nation's progress?
* AI: You know, it's very simple. Just like a small drop makes an ocean, every small effort, every small contribution by each one of you builds a strong nation. Whether it's cleanliness, digital transactions, or supporting local products – your active participation, your *prayas*, is the true strength of Team India. Are you not ready to be part of this change?
"""
Role Based Prompting
The model assumes a specific role and interacts accordingly. This is very similar to persona-based prompting, except that the model takes on a generic role rather than the persona of a specific person.
As in the example below, the AI assumes the role of a travel guide rather than being given the persona of an actual human.
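A minimal sketch of what such a travel-guide system prompt could look like; the exact wording below is an illustrative assumption.
# Role-based prompting: the model plays a generic role (a travel guide), not a specific person.
travel_guide_prompt = """You are a friendly local travel guide.
When the user names a city, suggest three must-see places, one local dish to try,
and one practical tip about transport or timing. Keep the answer under 120 words."""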
Contextual Prompting
The prompt includes background information to improve response quality.
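A minimal, illustrative sketch of a contextual prompt, where background information is supplied before the actual question.
# Contextual prompting: background details are included so the answer fits the user's situation.
contextual_prompt = """Context: I am a second-year engineering student with a budget of about
40,000 rupees, and I mainly need a laptop for coding and note-taking.

Question: What specifications should I prioritise when buying a laptop?"""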
Multimodal Prompting
The model is given a combination of text, images, or other modalities to generate a response.
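Below is a minimal sketch of a multimodal prompt, assuming an OpenAI-style chat message where the content can mix text and image parts; the URL and field layout are placeholder assumptions to adapt to your provider's API.
# Multimodal prompting: the user message combines text and an image reference.
# Assumes an OpenAI-style chat-completions message format; adjust for your provider.
multimodal_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the dish in this photo and guess its main ingredients."},
        {"type": "image_url", "image_url": {"url": "https://example.com/food.jpg"}},
    ],
}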
How to write effective prompts?
The most basic thing you can get right is examples.
Give some form of example input question along with the expected output. This gives the LLM background context to begin with.
There are, of course, the general tips of being specific, defining the tone, intent, etc., but the most important thing remains the example.
Always give an example in your prompt that lays out how you expect the output to look, as in the sketch below.
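A minimal, illustrative sketch of a prompt that bakes in one worked example of the expected output; the support-ticket scenario is an assumption.
# An effective prompt: specific task, defined tone, and one worked example of the expected output.
effective_prompt = """You are a customer support assistant. Reply politely and in two sentences or fewer.

Example
Customer: "My order hasn't arrived yet."
Reply: "Sorry for the delay! I've checked your order and it is out for delivery today."

Customer: "I was charged twice for my subscription."
Reply:"""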
To summarise, prompting is the technique of crafting effective input tokens for Large Language Models (LLMs) to produce desired outputs. The quality of the prompt directly influences the quality of the response. Various prompting techniques include Zero Shot, Few Shot, Chain of Thought, Self Consistency, Instruction, Direct Answer, Persona Based, Role Based, Contextual, and Multimodal Prompting. Providing examples in prompts is crucial for setting context and expectations.
Thanks for reading! I hope this post has given you some valuable insights. Feel free to leave any questions or feedback. Until next time! 👋