Prompting with the OpenAI SDK

ajay

Prompting means giving instructions or inputs to the AI to get a desired output. Think of it like asking a question or giving a task to the AI.

1. Zero-shot Prompting

🔹 Definition:

Zero-shot Prompting: The model is given a direct question or task without prior examples.

🔹 Detailed Explanation:

In zero-shot prompting, you don’t show the model how to solve the problem — you just tell it what you want. The model relies on its pre-trained knowledge to understand your intent and generate a response.

✅ Best for:

  • Simple questions

  • Language translation

  • Summarization

  • Quick data extraction


🔹 Real-Time Example:

Task: Summarize the paragraph in one sentence.

Prompt:

"Summarize this: OpenAI has developed advanced AI models that can understand and generate human-like language, helping developers create smarter applications."

🔹 Ready-to-Run Code:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")  # or set the OPENAI_API_KEY environment variable

def zero_shot_prompt():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "Summarize this: OpenAI has developed advanced AI models that can understand and generate human-like language, helping developers create smarter applications."}
        ]
    )
    print("Zero-shot Response:\n", response.choices[0].message.content)

zero_shot_prompt()

2. Few-shot Prompting

🔹 Definition:

Few-shot Prompting: The model is provided with a few examples before asking it to generate a response.

🔹 Detailed Explanation:

Here, you give 2–5 examples in the prompt. This helps the model understand the format, style, or logic of what you expect. It’s like teaching by example.

✅ Best for:

  • Custom formatting

  • Conversions (currency, date formats)

  • Custom Q&A or grammar correction


🔹 Real-Time Example:

Task: Convert temperatures from Celsius to Fahrenheit.

Prompt:

Convert Celsius to Fahrenheit:
C: 0 → F: 32
C: 10 → F: 50
C: 25 → F:

🔹 Code:

def few_shot_prompt():
    prompt = """Convert Celsius to Fahrenheit:
C: 0 → F: 32
C: 10 → F: 50
C: 25 → F:"""

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Few-shot Response:\n", response.choices[0].message.content)

few_shot_prompt()
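Writing few-shot prompts by hand gets tedious once the example list grows, but the pattern above can be generated from data. A small sketch (the `build_few_shot_prompt` name is my own, not part of the SDK):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt from an instruction, example pairs, and a new query."""
    lines = [instruction]
    for given, expected in examples:
        lines.append(f"{given} → {expected}")
    # Leave the arrow dangling so the model completes the pattern
    lines.append(f"{query} →")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert Celsius to Fahrenheit:",
    [("C: 0", "F: 32"), ("C: 10", "F: 50")],
    "C: 25",
)
print(prompt)
```

The resulting string is exactly the prompt used above, so the same helper can feed any number of examples into the `messages` payload.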

3. Chain-of-Thought (CoT) Prompting

🔹 Definition:

Chain-of-Thought Prompting: The model is encouraged to break down reasoning step by step before arriving at an answer.

🔹 Detailed Explanation:

Instead of just giving an answer, the model explains its thinking process step by step. This helps in logical or mathematical problems and improves accuracy.

✅ Best for:

  • Math problems

  • Logical reasoning

  • Code analysis


🔹 Real-Time Example:

Task: Solve a math word problem step-by-step.

Prompt:

"If a train travels 60 km in 1.5 hours, what is its speed? Think step by step."

🔹 Code:

def cot_prompt():
    prompt = "If a train travels 60 km in 1.5 hours, what is its speed? Think step by step."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Chain-of-Thought Response:\n", response.choices[0].message.content)

cot_prompt()

4. Self-Consistency Prompting

🔹 Definition:

Self-Consistency: The model generates multiple answers and selects the most consistent one (majority vote).

🔹 Detailed Explanation:

Instead of taking the first answer, you sample multiple outputs (e.g., 5), and then choose the most common or logical one. This reduces randomness and improves reasoning reliability.

✅ Best for:

  • Critical tasks

  • Logic-heavy answers

  • Higher confidence output


🔹 Real-Time Example:

Same as CoT, but get 5 outputs and choose the best.

🔹 Code:

def self_consistency_prompt():
    prompt = "If a train travels 60 km in 1.5 hours, what is its speed? Think step by step."
    results = []

    for _ in range(5):
        res = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=1.0,  # keep sampling diverse so the attempts can disagree
            messages=[{"role": "user", "content": prompt}]
        )
        results.append(res.choices[0].message.content)

    print("Self-Consistency Responses:")
    for i, ans in enumerate(results, 1):
        print(f"\nAttempt {i}:\n{ans.strip()}")

self_consistency_prompt()

5. Instruction Prompting

🔹 Definition:

Instruction Prompting: The model is explicitly instructed to follow a particular format or guideline.

🔹 Detailed Explanation:

You give clear commands like “Write a summary in bullet points” or “Reply only in JSON”. The model will follow those instructions closely.

✅ Best for:

  • Structured outputs

  • APIs or automation

  • Custom formats


🔹 Real-Time Example:

Task: Describe Apple Inc. in bullet points.

Prompt:

"List key facts about Apple Inc. in bullet points."

🔹 Code:

def instruction_prompt():
    prompt = "List key facts about Apple Inc. in bullet points."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Instruction Response:\n", response.choices[0].message.content)

instruction_prompt()
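When the instruction is "Reply only in JSON", the reply still needs defensive parsing, because models sometimes wrap the JSON in a Markdown code fence anyway. A small sketch (the `extract_json` helper is hypothetical, not part of the SDK):

```python
import json
import re

def extract_json(text):
    """Parse a JSON object from a model reply, tolerating ```json fences."""
    # Strip a Markdown code fence if the model added one despite instructions
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

reply = '```json\n{"company": "Apple Inc.", "founded": 1976}\n```'
data = extract_json(reply)
print(data["company"])  # → Apple Inc.
```

A `json.JSONDecodeError` here is a useful signal that the model ignored the format instruction and the request should be retried.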

6. Direct Answer Prompting

🔹 Definition:

Direct Answer Prompting: The model is asked to give a concise and direct response without explanation.

🔹 Detailed Explanation:

Great for chatbots, APIs, or command-line tools where you just want the answer — no fluff.

✅ Best for:

  • Yes/No answers

  • API returns

  • Quick data


🔹 Real-Time Example:

Prompt:

"What is the capital of Germany? Answer only."

🔹 Code:

def direct_answer_prompt():
    prompt = "What is the capital of Germany? Answer only."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Direct Answer:\n", response.choices[0].message.content)

direct_answer_prompt()

7. Persona-based Prompting

🔹 Definition:

Persona-based Prompting: The model is instructed to respond as if it were a particular character or professional.

🔹 Detailed Explanation:

You can make the model act like a doctor, teacher, poet, or even a movie character. It adjusts tone, vocabulary, and attitude accordingly.

✅ Best for:

  • Character simulations

  • Creative writing

  • Expert-style responses


🔹 Real-Time Example:

Prompt:

"You are a nutritionist. Suggest a healthy breakfast for weight loss."

🔹 Code:

def persona_prompt():
    messages = [
        {"role": "system", "content": "You are a certified nutritionist."},
        {"role": "user", "content": "Suggest a healthy breakfast for weight loss."}
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages
    )
    print("Persona-based Response:\n", response.choices[0].message.content)

persona_prompt()

8. Role-Playing Prompting

🔹 Definition:

Role-Playing Prompting: The model assumes a specific role and interacts accordingly.

🔹 Detailed Explanation:

It’s similar to persona-based but more interactive. You can create simulations like “You are a job interviewer” and conduct a mock interview.

✅ Best for:

  • Training bots

  • Scenario simulation

  • Education & entertainment


🔹 Real-Time Example:

Prompt:

"You are an interviewer. Ask me 3 questions for a software engineer position."

🔹 Code:

def role_playing_prompt():
    messages = [
        {"role": "system", "content": "You are a job interviewer."},
        {"role": "user", "content": "Ask me 3 questions for a software engineer position."}
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages
    )
    print("Role-Playing Response:\n", response.choices[0].message.content)

role_playing_prompt()
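Role-playing usually runs over several turns, and the model only stays in character if every previous turn is resent in `messages` on each call. A minimal sketch of that bookkeeping (the helper names are my own; the actual API call is omitted):

```python
def start_conversation(system_prompt):
    """Begin a role-play session with a system message that sets the role."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(messages, role, content):
    """Append one turn and return the list, ready to send as `messages`."""
    messages.append({"role": role, "content": content})
    return messages

history = start_conversation("You are a job interviewer.")
add_turn(history, "user", "Ask me 3 questions for a software engineer position.")
# ...send `history` to the API here, then record the reply so the model keeps context:
add_turn(history, "assistant", "1. Tell me about a project you are proud of. ...")
add_turn(history, "user", "Here is my answer...")
print(len(history))  # → 4
```

Forgetting to append the assistant's own replies is the most common cause of a role-play bot that "forgets" its character mid-conversation.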

9. Contextual Prompting

🔹 Definition:

Contextual Prompting: The prompt includes background information to improve response quality.

🔹 Detailed Explanation:

You provide extra context (past conversation, company details, etc.) so the model gives relevant, coherent answers.

✅ Best for:

  • Long conversations

  • Business apps

  • Chatbots


🔹 Real-Time Example:

Prompt:

"Company: EcoTech. Product: Solar Chargers. Task: Write a one-line pitch for investors."

🔹 Code:

def contextual_prompt():
    prompt = "Company: EcoTech. Product: Solar Chargers. Task: Write a one-line pitch for investors."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Contextual Response:\n", response.choices[0].message.content)

contextual_prompt()
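The context-plus-task prompt above follows a regular pattern, so it can be assembled from structured data instead of hand-written strings. A small sketch (the builder name is an assumption of mine):

```python
def build_context_prompt(context, task):
    """Flatten a dict of background facts plus a task into one prompt string."""
    facts = ". ".join(f"{key}: {value}" for key, value in context.items())
    return f"{facts}. Task: {task}"

prompt = build_context_prompt(
    {"Company": "EcoTech", "Product": "Solar Chargers"},
    "Write a one-line pitch for investors.",
)
print(prompt)
# → Company: EcoTech. Product: Solar Chargers. Task: Write a one-line pitch for investors.
```

Keeping the context in a dict makes it easy to inject company details, past conversation summaries, or user settings from a database before each call.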

10. Multimodal Prompting

🔹 Definition:

Multimodal Prompting: The model is given a combination of text, images, or other modalities to generate a response.

🔹 Detailed Explanation:

Works with vision-capable models such as GPT-4o. You send an image along with a prompt like “Describe this picture” or “What’s in this chart?”

✅ Best for:

  • Image captions

  • Visual understanding

  • Diagrams, screenshots


🔹 Real-Time Example (text + image description):

⚠️ This requires an image file and access to a vision-capable model.

import base64

def multimodal_prompt():
    # Load the image and encode it as base64 for a data URL
    with open("solar_panel.jpg", "rb") as image_file:
        base64_image = base64.b64encode(image_file.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe the image in detail."},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ]
    )

    print("Multimodal Response:\n", response.choices[0].message.content)

# multimodal_prompt()  # Uncomment after providing the image and a vision-capable model
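The base64 step above is easy to get wrong, so it helps to isolate it in a helper that can be tested without any API call. A sketch (the `to_data_url` name is my own; the tiny file written below is a stand-in for a real image, not a valid JPEG):

```python
import base64
import mimetypes

def to_data_url(path):
    """Encode a local image file as a data URL suitable for image_url content."""
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# Tiny throwaway file standing in for a real photo (JPEG magic bytes only)
with open("pixel.jpg", "wb") as f:
    f.write(b"\xff\xd8\xff")
print(to_data_url("pixel.jpg"))  # → data:image/jpeg;base64,/9j/
```

Guessing the MIME type from the extension keeps the helper correct for PNGs and other formats, instead of hard-coding `image/jpeg`.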

✅ Summary Table

| Type | Use Case | Strength |
| --- | --- | --- |
| Zero-shot | Direct question | Fast & simple |
| Few-shot | Format-specific answers | Learns by example |
| CoT | Reasoning/math | Step-by-step thinking |
| Self-Consistency | Complex logic | More accurate |
| Instruction | Format control | Precise output |
| Direct Answer | Short replies | Chatbots, APIs |
| Persona-based | Expert advice | Custom tone |
| Role-Playing | Simulations | Fun & realistic |
| Contextual | Business/chatbots | Deep understanding |
| Multimodal | Images + text | Visual tasks |