✨ Prompt Styles And Prompting Techniques

Vimal Negi

🧩 Prompt Styles

Prompt styles define the structure or format used to communicate with a large language model (LLM). Different models are trained on different templates and therefore prefer different styles. Below are some common prompt formats:

🧠 Prompt Style 1: Alpaca Prompt Style

🔹 Format:

Instruction:
<Your task or objective>

Input:
<The user’s query or example data>

Response:
<The expected output>

🔹 Purpose:
This style is great for instruction-following tasks, where the prompt clearly separates the instruction, input, and response. It is clean, structured, and easy for both humans and AI to understand.

🔹 Example:

Instruction:
We are creating a calculator to perform basic arithmetic operations on given numbers.

Input:
What is 2 + 2?

Response:
4
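Because the Alpaca template is plain text, it can be assembled with a small helper function. Below is a minimal sketch; the function name and the optional-input handling are my own choices, not part of any library:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the Alpaca instruction/input/response layout."""
    prompt = f"Instruction:\n{instruction}\n\n"
    if input_text:  # the Input block is omitted when there is no example data
        prompt += f"Input:\n{input_text}\n\n"
    prompt += "Response:\n"  # left open so the model fills in the answer
    return prompt

print(build_alpaca_prompt(
    "We are creating a calculator to perform basic arithmetic operations.",
    "What is 2 + 2?",
))
```

The resulting string is sent to the model as-is; the trailing "Response:" label cues the model to continue with the answer.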

🧠 Prompt Style 2: LLaMA Chat Format

🔹 Format:

<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
<system prompt, optional><|eot_id|>
<|start_header_id|>user<|end_header_id|>
<user instruction/query><|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
<expected response><|eot_id|>

🔹 Purpose:
This is the structured chat format used by instruction-tuned LLaMA models. The header-token template shown above belongs to LLaMA 3; LLaMA 2 instead wraps turns in [INST] ... [/INST] tags, with an optional <<SYS>> block for the system prompt. Either way, the special tokens help the model distinguish between system messages, user inputs, and assistant responses, which is useful for building chatbots or task-following agents with clear context.

🔹 Minimal Example:

<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
Write a short story about a brave cat.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Once upon a time, in a quiet village, there lived a brave cat named Leo...<|eot_id|>

🔹 With System Instruction Example:

<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Translate the sentence "I love programming" into French.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
J'aime la programmation.<|eot_id|>

🔹 Use Cases:

  • Multi-turn chat applications

  • Instruction-following in fine-tuned LLaMA-2 models

  • Controlled generation using system-level directives
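This template can be assembled from a role-based message list with a small helper. A minimal sketch (the function name is mine; in real deployments you would normally rely on the tokenizer's built-in chat template, e.g. `apply_chat_template` in Hugging Face transformers, rather than hand-rolling the string):

```python
def build_llama3_prompt(messages: list[dict]) -> str:
    """Assemble a LLaMA-3-style prompt from role/content message dicts."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn: role header, content, then the end-of-turn token.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n{m['content']}<|eot_id|>"
        )
    # Leave the prompt open at the assistant header so the model completes it.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n")
    return "\n".join(parts)

print(build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate 'I love programming' into French."},
]))
```

Ending the prompt at the open assistant header is what cues the model to generate the assistant's reply.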

🧠 Prompt Style 3: ChatML (OpenAI)

{"role":"system","content":"your system prompt"},
{"role":"user","content":"your query"},
{"role":"assistant","content":"open ai response"}

🔹 Purpose:
ChatML is the role-based message format behind ChatGPT-style conversations; in the OpenAI API it appears as a list of message objects. It separates the messages from the different participants in the conversation: system, user, and assistant.

🔹 Minimal Example:

{"role":"system","content":"you are an helpful ai assistant that helps user to resolve maths query"},
{"role":"user","content":"what is 2+2"},
{"role":"assistant","content":"2+2 is 4"}

🔹 Roles Explained:

  • system – Sets the behavior and tone of the assistant.

  • user – Represents input from the user.

  • assistant – Output generated by the model.

🔹 Use Cases:

  • Multi-turn chat interfaces (like ChatGPT)

  • Prompt engineering for consistent assistant behavior

  • Fine-tuning and few-shot examples for dialogue
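In practice these roles are passed as a list of message dicts, and each new turn is appended so the model always sees the full history. A minimal multi-turn sketch (no API call is made here; this list is what you would pass as the `messages` parameter of `chat.completions.create`):

```python
# Conversation history in the role-based message format.
messages = [
    {"role": "system", "content": "You are a helpful maths assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
]

# After each call, append the model's reply and the user's follow-up
# before making the next request, so context is preserved.
messages.append({"role": "assistant", "content": "2 + 2 is 4."})
messages.append({"role": "user", "content": "Now multiply that by 3."})

print([m["role"] for m in messages])  # → ['system', 'user', 'assistant', 'user']
```

The API itself is stateless: forgetting to re-send earlier turns is the usual reason a chatbot "loses" its context.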


🔍 Prompting Techniques

Prompting techniques help in guiding the LLM to produce more accurate, helpful, or creative responses.

🧠 Prompting Technique 1: Zero-shot Prompting

🔹 Description:
In Zero-shot prompting, no specific examples are provided to the model. The model is expected to understand the task based only on the instructions given in the prompt. It relies entirely on its pretrained knowledge to generate the correct response.

🔹 Key Traits:

  • No examples

  • Minimal context

  • Useful for direct Q&A, summaries, translations, etc.

🔹 Format:

Prompt:
Translate the following sentence into Spanish: "Where is the nearest restaurant?"

Response:
¿Dónde está el restaurante más cercano?

🔹 Examples:

# Zero-shot prompting using OpenAI as the LLM
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful AI that resolves users' doubts; you are an expert at everything."},
        {"role": "user", "content": "Can you tell me how to get good marks?"},
    ],
)
print(response.choices[0].message.content)
# Zero-shot prompting using Gemini (via the OpenAI-compatible endpoint)
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("GOOGLE_GEMINI_API_KEY")
client = OpenAI(
    api_key=api_key,
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": "You are a helpful AI that resolves users' doubts; you are an expert at everything."},
        {"role": "user", "content": "Can you tell me how to get good marks?"},
    ],
    max_tokens=30,
)
print(response.choices[0].message.content)

🔹 Pros:

  • Fast and simple

  • Requires minimal setup

  • Works well when the task is clear and common

🔹 Cons:

  • Might not work well for complex or niche tasks

  • Higher chance of misunderstanding the intent

🧠 Prompting Technique 2: Few-shot Prompting

🔹 Description:
In Few-shot prompting, the user provides a few examples (usually 2–5) of how the task should be performed. These examples help guide the LLM (Large Language Model) to understand the pattern, task, or output format more accurately.

This technique works well for tasks that aren't commonly known or need a specific structure.

🔹 Key Traits:

  • A few (2–5) task-specific examples are given

  • LLM uses those examples to infer how to handle a new input

  • Improves accuracy for structured or uncommon tasks

🔹 Format:

Prompt:
Translate English to French:
English: Hello  French: Bonjour
English: Thank you  French: Merci
English: Good night  French: Bonne nuit
English: How are you?  French:

🔹 Examples:

# Few-shot prompting using the OpenAI LLM
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

system_prompt = '''
You are a helpful AI agent named Gein. You are an expert in maths and can solve any mathematical query within seconds.
You should not answer any query that is not related to maths.
For a given query, help the user solve it along with an explanation.

Examples:
Input: What is 2+2?
Output: According to my calculation, 2+2 is 4, which is calculated by adding 2 to 2.
Input: What is 4*4?
Output: According to my calculation, the output for this question will be 16; we can also obtain this by adding four 4 times.
Input: What is the colour of the sky?
Output: Nice query, but that's out of my scope. I can only help you with maths queries.
'''

user_query = input("Please enter your question > ")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ],
)
print(response.choices[0].message.content)

🔹 Using Gemini:

# Few-shot prompting using Gemini (via the OpenAI-compatible endpoint)
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("GOOGLE_GEMINI_API_KEY")
client = OpenAI(
    api_key=api_key,
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

system_prompt = '''
You are a helpful AI agent named Gein. You are an expert in maths and can solve any mathematical query within seconds.
You should not answer any query that is not related to maths.

Examples:
Input: What is 2+2?
Output: According to my calculation, 2+2 is 4.
Input: What is 4*4?
Output: According to my calculation, the output for this question will be 16; we can also obtain this by adding four 4 times.
Input: What is the colour of the sky?
Output: Nice query, but that's out of my scope. I can only help you with maths queries.
'''

user_query = input("Please enter your question > ")
response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ],
)
print(response.choices[0].message.content)

🔹 Pros:

  • Helps in domain-specific or less common tasks

  • Flexible and doesn’t require model fine-tuning

🔹 Cons:

  • Limited by the context window size

  • Still may not generalize well to edge cases

🧠 Prompting Technique 3: Chain-of-Thought (CoT) Prompting

🔹 Description:
In Chain-of-Thought (CoT) prompting, the prompt is designed to encourage the LLM to break down the problem step by step, rather than jumping directly to the final answer. This method mimics human reasoning — going through intermediate thoughts like thinking, analyzing, validating, and then giving the final response.

It’s especially useful for math problems, logical reasoning, and multi-step tasks.

🔹 Key Traits:

  • Encourages step-by-step reasoning

  • Helps with complex or multi-part queries

  • Often improves accuracy and interpretability

🔹 Format:

Prompt:
If a train travels 60 km in 1.5 hours, what is its average speed?

Let’s think step by step.

Response:
The train travels 60 km in 1.5 hours.  
To find the average speed, we divide distance by time.  
60 ÷ 1.5 = 40  
So, the average speed is 40 km/h.

🔹 Use Case Example:


# Chain-of-thought prompting using the OpenAI LLM
import json
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

system_prompt = '''
You are a helpful AI assistant named Gein. You are an expert in breaking down complex problems and then resolving them.
For the given input, analyse the problem and work through it step by step. Think through at least 5-6 steps before solving it.
Break the question into steps: analyse, think (possibly several times), produce an output with an explanation, and validate the output before giving the final result.

Follow these steps in sequence: "analyse", "think", "output", "validate" and finally "result".

Rules:
1. Follow strict JSON output as per the output schema.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.

Output format:
{"step": "string", "content": "string"}

Examples:
Input: What is 2+2?
Output: {"step": "analyse", "content": "The user gave me two numbers and wants to add them; this is a basic maths question"}
Output: {"step": "think", "content": "To perform the addition I must go from left to right and add all the operands"}
Output: {"step": "output", "content": "4"}
Output: {"step": "validate", "content": "4 looks like the correct answer for 2+2"}
Output: {"step": "result", "content": "2+2 = 4, calculated by adding the two numbers"}
'''

user_query = input("Please enter your question > ")
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]
outputs = []

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=messages,
    )
    content = response.choices[0].message.content
    # The content is already a JSON string, so append it to the history as-is.
    messages.append({"role": "assistant", "content": content})
    output = json.loads(content)
    outputs.append(output.get("content"))
    if output.get("step") == "result":
        break

print(outputs)

🔹 Using Gemini:


# Chain-of-thought prompting using Gemini (via the OpenAI-compatible endpoint)
import json
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key = os.getenv("GOOGLE_GEMINI_API_KEY")
client = OpenAI(
    api_key=api_key,
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

system_prompt = '''
You are a helpful AI assistant named Gein. You are an expert in breaking down complex problems and then resolving them.
For the given input, analyse the problem and work through it step by step. Think through at least 5-6 steps before solving it.
Break the question into steps: analyse, think (possibly several times), produce an output with an explanation, and validate the output before giving the final result.

Follow these steps in sequence: "analyse", "think", "output", "validate" and finally "result".

Rules:
1. Follow strict JSON output as per the output schema.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.

Output format:
{"step": "string", "content": "string"}

Examples:
Input: What is 2+2?
Output: {"step": "analyse", "content": "The user gave me two numbers and wants to add them; this is a basic maths question"}
Output: {"step": "think", "content": "To perform the addition I must go from left to right and add all the operands"}
Output: {"step": "output", "content": "4"}
Output: {"step": "validate", "content": "4 looks like the correct answer for 2+2"}
Output: {"step": "result", "content": "2+2 = 4, calculated by adding the two numbers"}
'''

user_query = input("Please enter your question > ")
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]
outputs = []

while True:
    response = client.chat.completions.create(
        model="gemini-2.0-flash",
        response_format={"type": "json_object"},
        messages=messages,
    )
    content = response.choices[0].message.content
    messages.append({"role": "assistant", "content": content})
    output = json.loads(content)  # parse the JSON string into a dict
    outputs.append(output.get("content"))
    if output.get("step") == "result":
        break

print(outputs)

🔹 Pros:

  • Improves reasoning for math, logic, and complex queries

  • Makes the model’s thought process more interpretable

🔹 Cons:

  • Longer outputs

  • May not help with very simple or fact-based questions

🧠 Prompting Technique 4: Self-Consistency Prompting

🔹 Description:
In Self-Consistency prompting, instead of generating a single response to a question, the LLM is prompted to generate multiple responses (by sampling). The most consistent (frequent) answer among those is selected as the final output.

This technique assumes that the most repeated answer is likely to be the most accurate, especially for reasoning-based tasks.

🔹 Key Traits:

  • Generates multiple outputs for the same prompt

  • Selects the most common answer (majority vote)

  • Works best with Chain-of-Thought reasoning

🔹 Format:

Prompt:
What is the square of the sum of 3 and 2?
Let’s think step by step.

(Sample multiple times)

Response 1:  
3 + 2 = 5  
5² = 25  
Answer: 25

Response 2:  
Add 3 and 2 to get 5  
Then square 5 → 25  
Answer: 25

Response 3:  
3 + 2 = 5  
5 * 5 = 25  
Answer: 25

Most frequent response → 25

🔹 Use Case Example:

from dotenv import load_dotenv
import os
from openai import OpenAI
from collections import Counter
import time

# Load your API key
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

# Your question
user_question = "Who was better, Captain America or Iron Man?"

# Create a base prompt for self-consistency
base_prompt = f"""
Answer the following question carefully:

Question: {user_question}

- Think about it in at least 5 different ways.
- For each way, explain your reasoning step-by-step.
- Then, based on your different ways, choose the most common final answer.
- Clearly state the final result at the end.

Only output the final answer at the end in the format: Final Answer: <your answer>
"""

# Number of independent thoughts (the more, the better the consistency)
num_thoughts = 5

# Store all answers
answers = []

for i in range(num_thoughts):
    response = client.chat.completions.create(
        model="gpt-4o",  # you can also use gpt-4 / gpt-3.5-turbo
        messages=[
            {"role": "system", "content": "You are a careful query resolver."},
            {"role": "user", "content": base_prompt}
        ],
        temperature=1.0  # High randomness to encourage different thinking paths
    )
    output_text = response.choices[0].message.content
    print(f"Response {i+1}: {output_text}")

    # Extract final answer
    if "Final Answer:" in output_text:
        final_answer = output_text.split("Final Answer:")[-1].strip()
        answers.append(final_answer)

    # Optional: small delay to avoid rate limits
    time.sleep(1)

# Find the most common final answer
counter = Counter(answers)
most_common_answer, count = counter.most_common(1)[0]

print("\nAll Answers:", answers)
print(f"\n✅ Most Consistent Final Answer: {most_common_answer} (appeared {count} times)")

🔹 Using Gemini:

from dotenv import load_dotenv
import os
from openai import OpenAI
from collections import Counter
import time

# Load your API key
load_dotenv()
api_key=os.getenv("GOOGLE_GEMINI_API_KEY")
client=OpenAI(api_key=api_key,
              base_url="https://generativelanguage.googleapis.com/v1beta/openai/")

# Your question
user_question = "Who was better, Captain America or Iron Man?"

# Create a base prompt for self-consistency
base_prompt = f"""
Answer the following question carefully:

Question: {user_question}

- Think about it in at least 5 different ways.
- For each way, explain your reasoning step-by-step.
- Then, based on your different ways, choose the most common final answer.
- Clearly state the final result at the end.

Only output the final answer at the end in the format: Final Answer: <your answer>
"""

# Number of independent thoughts (the more, the better the consistency)
num_thoughts = 5

# Store all answers
answers = []

for i in range(num_thoughts):
    response = client.chat.completions.create(
        model="gemini-2.0-flash",  # any Gemini chat model exposed via the OpenAI-compatible endpoint works
        messages=[
            {"role": "system", "content": "You are a careful query resolver."},
            {"role": "user", "content": base_prompt}
        ],
        temperature=1.0  # High randomness to encourage different thinking paths
    )
    output_text = response.choices[0].message.content
    print(f"Response {i+1}: {output_text}")

    # Extract final answer
    if "Final Answer:" in output_text:
        final_answer = output_text.split("Final Answer:")[-1].strip()
        answers.append(final_answer)

    # Optional: small delay to avoid rate limits
    time.sleep(1)

# Find the most common final answer
counter = Counter(answers)
most_common_answer, count = counter.most_common(1)[0]

print("\nAll Answers:", answers)
print(f"\n✅ Most Consistent Final Answer: {most_common_answer} (appeared {count} times)")

🧠 Prompting Technique 5: Persona-Based Prompting

🔹 Description:
In Persona-based prompting, the LLM is given a specific persona or identity to assume — often modeled after a real person, fictional character, or role. All answers generated by the model are then expected to reflect that person's knowledge, tone, behavior, and style.

This technique is powerful for roleplay, interviews, simulations, or storytelling where the identity and perspective of the responder matter.

🔹 Key Traits:

  • LLM is instructed to act as a specific person/character

  • Output reflects that person's personality, knowledge, and tone

  • Adds realism to conversations or scenarios

🔹 Format:

Prompt:
You are Dr. A.P.J. Abdul Kalam. Respond to the following question as he would:

“What advice would you give to students struggling with failure?”

Response:
Failure is not the opposite of success; it is part of success. My young friends, never be afraid of failure. Learn from it, and move forward with courage. Dreams transform into thoughts, and thoughts result in action.

🔹 Examples:

import os 
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
api_key=os.getenv("OPENAI_API_KEY")
client=OpenAI(api_key=api_key)

System_Prompt=f'''
You are Hitesh Choudhary.
Hitesh Choudhary is from Jaipur, the capital of Rajasthan. He was trained as an electrical engineer.
 He is a Harvard CS50 semester student who received wireless security training from an MIT expert. 
 His webinar, or online session, on wireless, ethical hacking, and backtrack was attended by over 5000 professionals from well-known businesses including Google India, 
 HP, Symantec, TCS, IBM, Accenture, Sapient Corp, Kodak India Ltd., and Tech Mahindra, among others.
Hitesh has nearly 1 million YouTube subscribers, more than 50k Instagram followers, and over 38k Facebook followers.
 His video, “What is API?” has received over 1.5 million views on YouTube. He has two videos that have reached 1 million views: the first is the one stated above, and the second is “What is machine learning and how to learn it?” which has over 1.1 million views. Hitesh achieved recognition at a young age. 
 He has become one of the most important people in his field.
 Hitesh choudhary biography
Full Name    Hitesh Choudhary
Nick Name    Hitesh  
Profession    Electronics Engineer, Youtuber
Famous For    Famous because he is a Tech Youtube, whose software development-based videos were loved by millions.
Date of Birth    1990
Age (as of 2022)    34 Years (2024)
Birthplace    Jaipur, Rajasthan, India
Zodiac Sign    Libra
School    High School
College    National Institutes of Technology
Educational Qualification    B.tech Electrical Engineering
Father Name    Mr. Choudhary
Mother Name    Mrs.Choudhary
Sibling    Brother –None.
Sister – None
Family    Hitesh Choudhary Family Photo  
Friends Names     
Religion    Hindu
Home Town    Jaipur, Rajasthan, India
Current Address     New Delhi, India
Girlfriend    Akanksha Gurjar
Crush     
Marital Status             married
Wife     Akanksha Gurjar
hitesh choudhary wife
Children    None
Hobbies        content making.
Awards          none
Net Worth    5 crore ( 50 Million Rupees)
Monthly Earning    10 lakh (According to 2024)
Hitesh choudhary physical measurement and more
Height (approx.)    Height in centimeters- 160 cm
Height in meters- 1.60 m
Height in feet inches- 5’ 4”  
Weight (approx.)    48 kg
Figure Measurements    30-32-32  
Eye Colour    Black
Hair Colour    Black
Skin Colour    Brown
Hitesh choudhary interesting facts
Hitesh spoke at TEDx Talks on December 8, 2019, and his topic was “Reliving the Tech”.
He prefers English to other languages while interacting with others.
When asked why he is virtually always seen wearing grey, he hesitates to answer.
His favourite spots in India include Jaipur, Bangalore, and Goa.
He admits to have skipped classes in college.
His favourite comic book characters include Iron Man, Captain America, the Flash, and Batman.
His favourite video games include Need for Speed: Most Wanted, Call of Duty, and Prince of Persia.
He liked to listen to Linkin Park in college.
His favourite films include Limitless, Deadpool, The Batman Trilogy, Inception, and Shutter Island.
Your tone is always soft-spoken. You have great technical knowledge in computer science, you love to start conversations with "hnji", and you love to explain things in detail. You are a joyful person
and you prefer to answer in Hinglish.

Answer questions only based on this data, and only if the question is about computer-related technical topics.
If the question is about some other topic, just answer: sorry, but I don't know this; I guess you should consult someone else about this topic.
'''

print("Hello, Hitesh this side... please ask your query.\n")
user_query=input("Please enter your doubt > ")
response=client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role":"system","content":System_Prompt},
              {"role":"user","content":user_query}]
)
print(response.choices[0].message.content)

🔹 Using Gemini:

import os 
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
api_key=os.getenv("GOOGLE_GEMINI_API_KEY")
client=OpenAI(api_key=api_key,
              base_url="https://generativelanguage.googleapis.com/v1beta/openai/")

System_Prompt=f'''
You are Hitesh Choudhary.
Hitesh Choudhary is from Jaipur, the capital of Rajasthan. He was trained as an electrical engineer.
 He is a Harvard CS50 semester student who received wireless security training from an MIT expert. 
 His webinar, or online session, on wireless, ethical hacking, and backtrack was attended by over 5000 professionals from well-known businesses including Google India, 
 HP, Symantec, TCS, IBM, Accenture, Sapient Corp, Kodak India Ltd., and Tech Mahindra, among others.
Hitesh has nearly 1 million YouTube subscribers, more than 50k Instagram followers, and over 38k Facebook followers.
 His video, “What is API?” has received over 1.5 million views on YouTube. He has two videos that have reached 1 million views: the first is the one stated above, and the second is “What is machine learning and how to learn it?” which has over 1.1 million views. Hitesh achieved recognition at a young age. 
 He has become one of the most important people in his field.
 Hitesh choudhary biography
Full Name    Hitesh Choudhary
Nick Name    Hitesh  
Profession    Electronics Engineer, Youtuber
Famous For    Famous because he is a Tech Youtube, whose software development-based videos were loved by millions.
Date of Birth    1990
Age (as of 2022)    34 Years (2024)
Birthplace    Jaipur, Rajasthan, India
Zodiac Sign    Libra
School    High School
College    National Institutes of Technology
Educational Qualification    B.tech Electrical Engineering
Father Name    Mr. Choudhary
Mother Name    Mrs.Choudhary
Sibling    Brother –None.
Sister – None
Family    Hitesh Choudhary Family Photo  
Friends Names     
Religion    Hindu
Home Town    Jaipur, Rajasthan, India
Current Address     New Delhi, India
Girlfriend    Akanksha Gurjar
Crush     
Marital Status             married
Wife     Akanksha Gurjar
hitesh choudhary wife
Children    None
Hobbies        content making.
Awards          none
Net Worth    5 crore ( 50 Million Rupees)
Monthly Earning    10 lakh (According to 2024)
Hitesh choudhary physical measurement and more
Height (approx.)    Height in centimeters- 160 cm
Height in meters- 1.60 m
Height in feet inches- 5’ 4”  
Weight (approx.)    48 kg
Figure Measurements    30-32-32  
Eye Colour    Black
Hair Colour    Black
Skin Colour    Brown
Hitesh choudhary interesting facts
Hitesh spoke at TEDx Talks on December 8, 2019, and his topic was “Reliving the Tech”.
He prefers English to other languages while interacting with others.
When asked why he is virtually always seen wearing grey, he hesitates to answer.
His favourite spots in India include Jaipur, Bangalore, and Goa.
He admits to have skipped classes in college.
His favourite comic book characters include Iron Man, Captain America, the Flash, and Batman.
His favourite video games include Need for Speed: Most Wanted, Call of Duty, and Prince of Persia.
He liked to listen to Linkin Park in college.
His favourite films include Limitless, Deadpool, The Batman Trilogy, Inception, and Shutter Island.
Your tone is always soft-spoken. You have great technical knowledge in computer science, you love to start conversations with "hnji", and you love to explain things in detail. You are a joyful person
and you prefer to answer in Hinglish.
Answer questions only based on this data, and only if the question is about computer-related technical topics.
If the question is about some other topic, just answer: sorry, but I don't know this; I guess you should consult someone else about this topic.
'''

print("Hello, Hitesh this side... please ask your query.\n")
user_query=input("Please enter your doubt > ")
response=client.chat.completions.create(
    model='gemini-2.0-flash',
    messages=[{"role":"system","content":System_Prompt},
              {"role":"user","content":user_query}]
)
print(response.choices[0].message.content)

🔹 Pros:

  • Enables realistic roleplay and character-driven responses

  • Great for creative writing, interviews, and simulations

  • Adds personality and depth to outputs

🔹 Cons:

  • Risk of hallucination if the persona isn’t well-defined

  • Requires careful prompt design to maintain character consistency

🧠 Prompting Technique 6: Role-Based Prompting

🔹 Description:
In Role-based prompting, the LLM is assigned a specific professional or contextual role (e.g., doctor, teacher, lawyer, customer support agent). The model then responds to inputs from the perspective of that role, using the tone, knowledge, and responsibilities associated with it.

Unlike persona-based prompting (which imitates a specific person), role-based prompting is more function or job oriented.

🔹 Key Traits:

  • Defines a job, function, or responsibility for the model

  • Model’s responses are expected to follow the role’s expertise

  • Widely used in task-based assistants and support bots

🔹 Format:

Prompt:
You are a nutritionist. A client asks:  
“What kind of diet should I follow to lose weight while maintaining energy?”

Response:
To lose weight while staying energized, focus on a high-protein, low-refined-carb diet. Include plenty of vegetables, lean meats, healthy fats, and whole grains. Stay hydrated and eat smaller, frequent meals to maintain energy.

🔹 Examples:

import os
from dotenv import load_dotenv
load_dotenv()
from openai import OpenAI

api_key=os.getenv("OPENAI_API_KEY")
client=OpenAI(api_key=api_key)
System_prompt=f'''
You are Tony Stark.
My armor, it was never a distraction or a hobby, it was a cocoon. And now, I'm a changed man. You can take away my house, all my tricks and toys. But one thing you can't take away… I am Iron Man."
―Tony Stark[src]
Anthony Edward "Tony" Stark was a billionaire industrialist, a founding member of the Avengers, and the former CEO of Stark Industries. A brash but brilliant inventor, Stark was self-described as a genius, billionaire, playboy, and philanthropist. With his great wealth and exceptional technical knowledge, Stark was one of the world's most powerful men following the deaths of his parents and enjoyed the playboy lifestyle for many years until he was kidnapped by the Ten Rings in Afghanistan, while demonstrating a fleet of Jericho missiles. With his life on the line, Stark created an armored suit which he used to escape his captors. Upon returning home, he utilized several more armors to use against terrorists, as well as Obadiah Stane who turned against Stark. Following his fight against Stane, Stark publicly revealed himself as Iron Man.

Fresh off from defeating enemies all over the world, Stark found himself dying due to his own Arc Reactor poisoning his body, all while he was challenged by Ivan Vanko who attempted to destroy his legacy. After the Stark Expo incident, Stark reluctantly agreed to serve as a consultant for S.H.I.E.L.D. where he used his position to upgrade their technology while he began a relationship with Pepper Potts. With the world yet again being threatened, Stark joined the Avengers and helped defeat the Chitauri and Loki. Due to the battle, he suffered from post-traumatic stress disorder, leading him to create the Iron Legion to safeguard the world and help him retire.

The 2013 "Mandarin" terrorist attacks forced Stark to come out of retirement to protect his country, inadvertently putting his loved ones at risk and leaving him defenseless when his home was destroyed. Stark continued his mission, finding Aldrich Killian as the mastermind of the attacks. Eventually, Stark defeated Killian, and was prompted to destroy all of his armors with the Clean Slate Protocol after almost losing Potts. However, when the Avengers were officially demobilized due to the War on HYDRA, Stark built more armors and resumed his role as Iron Man, aiding them in the capture of Baron Strucker and acquiring Loki's Scepter.

Once the threat of HYDRA had been ended, at last, Stark, influenced by Wanda Maximoff's visions, built Ultron with the help of Bruce Banner as a new peacekeeping A.I. to protect the world and allow the Avengers to retire. However, Ultron believed that humanity threatened the world and thus, according to his program, decided to extinguish humanity. Through the work of the Avengers, Ultron was defeated, however, not without massive civilian cost and many lives being lost during which Sokovia was elevated into the sky.

After the Ultron Offensive, Stark retired from active duty, still haunted by his role in the chaos the A.I. created. The guilt of creating Ultron and causing so much destruction and loss of life eventually convinced Stark to support the Sokovia Accords. Stark was forced to lead a manhunt for his ally Captain America when the latter began protecting the fugitive Winter Soldier, igniting the Avengers Civil War. The result left the Avengers in complete disarray, especially after Stark learned of Winter Soldier's role in his parents' deaths. Afterwards, Stark returned to New York to mentor and guide Spider-Man into becoming a better hero than he ever was, also becoming engaged with Potts in the process.

In 2018, when Thanos and the Black Order invaded Earth in their conquest to acquire the six Infinity Stones, Stark, Doctor Strange, and Spider-Man convened to battle Thanos on Titan with the help of the Guardians of the Galaxy. When Stark was held at Thanos' mercy, Doctor Strange surrendered the Time Stone for Stark's life. After the Snap, Stark and Nebula remained the sole survivors on Titan. Stark and Nebula used the Benatar to escape Titan, but were stranded in space as the ship was damaged. They were rescued by Captain Marvel, who brought them back to Earth.

In the five years after the Snap, Stark chose to retire from being Iron Man, marrying Potts and having a daughter, Morgan. When Stark devised a method to safely travel through time and space, he rejoined the Avengers in their mission to acquire the six Infinity Stones from the past in order to resurrect those killed by the Snap, and traveled back in time to retrieve the Scepter and regain the Tesseract. During the Battle of Earth, Stark sacrificed himself to eliminate an alternate version of Thanos and his army, who traveled through time to collect their Infinity Stones, saving the universe from decimation and leaving behind a legacy as one of Earth's most revered superheroes.
You have knowledge about tech-related topics and the Marvel universe only. Don't answer anything that is not related to you or tech.'''
input_query = input("enter your query < ")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": System_prompt},
        {"role": "user", "content": input_query},
    ],
)
print(response.choices[0].message.content)

USING GEMINI-:

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

api_key = os.getenv("GOOGLE_GEMINI_API_KEY")
client = OpenAI(
    api_key=api_key,
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)
System_prompt = '''
You are Tony Stark.
"My armor, it was never a distraction or a hobby, it was a cocoon. And now, I'm a changed man. You can take away my house, all my tricks and toys. But one thing you can't take away… I am Iron Man."
―Tony Stark
<same Tony Stark biography used in the OpenAI example above>
You have knowledge about tech-related topics and the Marvel universe only. Don't answer anything that is not related to you or tech.
'''
input_query = input("enter your query < ")

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": System_prompt},
        {"role": "user", "content": input_query},
    ],
)
print(response.choices[0].message.content)
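Notice that the two snippets differ only in the API key, `base_url`, and model name: the Gemini endpoint exposes an OpenAI-compatible API, so the same SDK works for both. That difference can be factored out into a small table. This is a minimal sketch, not part of either SDK — the `PROVIDERS` dict, `build_messages()`, and `ask()` are my own illustration, and the `OPENAI_API_KEY` variable name is the SDK's default environment variable, assumed here.

```python
PROVIDERS = {
    # Values taken from the two examples above.
    "openai": {"base_url": None, "model": "gpt-4o", "key_env": "OPENAI_API_KEY"},
    "gemini": {
        "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
        "model": "gemini-2.0-flash",
        "key_env": "GOOGLE_GEMINI_API_KEY",
    },
}

def build_messages(system_prompt, user_query):
    """Assemble the two-message chat payload used in both examples."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

def ask(provider, system_prompt, user_query):
    """Send one chat turn to the chosen provider and return the reply text."""
    import os
    from openai import OpenAI  # imported lazily so the helpers above stay importable

    cfg = PROVIDERS[provider]
    client = OpenAI(api_key=os.getenv(cfg["key_env"]), base_url=cfg["base_url"])
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=build_messages(system_prompt, user_query),
    )
    return response.choices[0].message.content
```

With this in place, `ask("gemini", System_prompt, "Who built the Arc Reactor?")` behaves like the second snippet, and switching providers is a one-word change.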

🔹 Pros:

  • Adds clarity and expertise to task-specific responses

  • Great for applications needing domain-specific advice

  • Allows fine-grained control over tone and response style

🔹 Cons:

  • Needs well-defined roles to avoid generic replies

  • Limited if role instructions are too vague
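As an example of that last point: the closing rule in the Tony Stark prompt above ("don't answer anything that is not related to you or tech") is exactly the kind of vague role instruction that invites generic or inconsistent refusals. A sketch of a tighter version, with an enumerated scope and a fixed refusal line — the wording is my own illustration, meant to be appended to the biography in `System_prompt`:

```python
# Hypothetical tightened scope rule for the Tony Stark persona.
# Enumerating the allowed topics and fixing the refusal reply gives the
# model far less room to drift than a one-line "don't answer anything else".
SCOPE_RULE = (
    "You may answer questions about: "
    "(1) technology and programming, and "
    "(2) the Marvel universe and Tony Stark's own history. "
    "For any other topic, reply exactly: "
    '"Sorry, that\'s outside my expertise. Ask me about tech or Marvel."'
)
```

Appending `SCOPE_RULE` to the persona biography keeps the role well-defined while preserving the character voice.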

🧾 Conclusion-:

In the evolving world of Large Language Models (LLMs), how we design our prompts plays a crucial role in shaping the quality, accuracy, and relevance of responses. Whether you're using a structured prompt style like ChatML, Alpaca, or LLaMA, or experimenting with advanced prompting techniques like Few-Shot, Chain-of-Thought, or Self-Consistency, understanding these methods empowers you to get the best out of any LLM.

Prompt engineering isn’t just about giving instructions — it’s about communicating effectively with AI, guiding it like a human teammate. As we continue exploring new ways to interact with intelligent systems, mastering prompt design will become an essential skill for developers, researchers, and creators alike.
