Speaking Code to Machines: The Art of Prompt Engineering

Introduction

Prompt engineering is not just about using AI—it’s about communicating with it. It’s the art and science of asking better questions using clear, precise, and contextual prompts. It’s about learning the language that AI models understand, and using that language to extract high-quality, accurate, and relevant answers.

“The quality of a machine’s answer is only as good as the quality of your question.”

In this blog, we’ll explore what prompt engineering is, why it matters in computer science and beyond, how to master it through five core principles, and how to evaluate and improve your prompts. We’ll end with practical examples that demonstrate the process.


What Is Prompt Engineering?

Prompt engineering is the process of crafting well-structured input to AI systems so that the output is coherent, relevant, and useful. Think of it as coding in natural language: you are giving structured commands, but in a form that combines linguistics, logic, and intent.

Prompt engineering is not just about getting answers. It’s about knowing how to ask in a way that saves time, increases accuracy, and helps you reflect more deeply on the actual task.

“Mastering prompt engineering means mastering your own understanding of the problem.”


Why Is Prompt Engineering Important?

In a world where AI can generate code, write essays, summarize research, and even create music, your ability to direct it effectively becomes your competitive edge.

  • It saves time

  • It increases efficiency and relevance

  • It forces clarity of thought

  • It deepens your understanding of your own problem

“Prompt engineering is not about automation—it’s about amplification.”

When done right, prompt engineering is like setting a GPS: you define your destination, share your current position, provide alternate routes, and then evaluate the best way forward.


The Five Core Principles of Prompt Engineering (T.C.R.E.I.)

Here’s the framework:

  1. T – Task

  2. C – Context

  3. R – Reference

  4. E – Evaluate

  5. I – Iterate

Let’s go through them with coding examples.


1. Task – What are you trying to do?

Define your task clearly. Be precise. Avoid vagueness.

✅ Good Example:
“Write a Python function to check if a string is a palindrome.”

❌ Bad Example:
“Help me with Python.”


2. Context – Where is this task taking place?

Provide details about your working environment or limitations.

✅ Example:
“I’m working in Python 3.10 using a Jupyter Notebook. The input will be a lowercase string without punctuation.”

This helps the AI tailor its response appropriately.


3. Reference – What patterns or formats should be followed?

Give examples or constraints based on past experiences, documentation, or preferences.

✅ Example:
“Use list comprehensions and avoid using the reversed() function.”

This tells the model how to structure its response.
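Under those constraints, a response might look like the following sketch (the function name is illustrative, and the list comprehension inside all() exists only to honor the stated preference; a plain generator expression would be more idiomatic):

```python
def is_palindrome(s: str) -> bool:
    """Check whether s reads the same forwards and backwards.

    Compares mirrored index pairs with a list comprehension,
    per the reference constraints (no reversed()).
    """
    return all([s[i] == s[-(i + 1)] for i in range(len(s) // 2)])

print(is_palindrome("racecar"))  # True
print(is_palindrome("python"))   # False
```

Because the reference told the model *how* to solve the problem, not just *what* to solve, the structure of the answer is predictable.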


4. Evaluate – Did the response fulfill the task?

After receiving an answer, don’t stop there. Evaluate:

  • Does the code run?

  • Is the logic sound?

  • Does it follow your context and constraints?

If not, don’t just accept the response. Move to iteration.


5. Iterate – Refine your prompt if needed

You may need to adjust wording, restructure the sentence, break the prompt into smaller steps, or add constraints.

✅ Example:
“Now add error handling for empty input.”

Or even:

“Split this into two separate functions—one for validation and one for checking palindromes.”
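Applied to the running palindrome example, those two refinements might yield a sketch like this (the function names are illustrative, not prescribed):

```python
def validate_input(s: str) -> str:
    """Raise ValueError for empty input; return the string otherwise."""
    if not s:
        raise ValueError("Input string must not be empty.")
    return s

def is_palindrome(s: str) -> bool:
    """Check a validated, lowercase string for the palindrome property."""
    s = validate_input(s)
    return s == s[::-1]

print(is_palindrome("level"))  # True
```

Each iteration narrows the gap between what you asked for and what you actually needed.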


Two End-to-End Prompt Examples

🌟 Example 1 – Beginner

Prompt:
“Write a Python function that accepts a string and checks if it’s a palindrome. Assume no punctuation or spaces. Use Python 3.10 and avoid using the reversed() function. Then give test cases.”

AI Output: ✅ Clear, relevant, and easy to evaluate.
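For instance, the output might resemble this sketch (a two-pointer loop is one of several ways to honor the no-reversed() constraint, not the only valid answer):

```python
def is_palindrome(s: str) -> bool:
    """Return True if s is a palindrome; assumes lowercase, no punctuation."""
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True

# Test cases
tests = {"racecar": True, "noon": True, "hello": False, "": True}
for word, expected in tests.items():
    assert is_palindrome(word) == expected
print("All test cases passed.")
```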

Evaluation:

  • Task: ✅ Check for palindrome

  • Context: ✅ Python 3.10, no spaces/punctuation

  • Reference: ✅ Avoid reversed()

  • Evaluate: Code runs and gives expected output

  • Iterate: May ask for optimization using recursion


🌟 Example 2 – Intermediate/Developer

Prompt:
“Generate a SQL query to retrieve the top 5 users who made the most purchases last month from a PostgreSQL database. The table is called ‘orders’ and has columns: user_id, order_date, amount. Ignore failed transactions.”

Follow-Up Prompt:
“Now optimize the query using a CTE and filter only for users who spent over $500.”

Breakdown:

  • Task: Retrieve and rank top users

  • Context: PostgreSQL, columns specified

  • Reference: CTE usage and filtering

  • Evaluate: Run the query and verify

  • Iterate: Add an additional filter, e.g., by region or age group
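A hedged sketch of what these two prompts might produce, checked here against an in-memory SQLite database standing in for PostgreSQL. Note the assumptions: the original column list has no status field, so a `status` column is added here to represent failed transactions, and "last month" is hard-coded as June 2024 for illustration.

```python
import sqlite3

# First prompt: top 5 users by completed purchase count last month.
top_users_query = """
SELECT user_id, COUNT(*) AS purchases
FROM orders
WHERE order_date >= '2024-06-01' AND order_date < '2024-07-01'
  AND status <> 'failed'
GROUP BY user_id
ORDER BY purchases DESC
LIMIT 5;
"""

# Follow-up iteration: the same aggregation inside a CTE, filtered
# to users who spent over $500.
big_spenders_query = """
WITH user_totals AS (
    SELECT user_id, COUNT(*) AS purchases, SUM(amount) AS total_spent
    FROM orders
    WHERE order_date >= '2024-06-01' AND order_date < '2024-07-01'
      AND status <> 'failed'
    GROUP BY user_id
)
SELECT user_id, purchases
FROM user_totals
WHERE total_spent > 500
ORDER BY purchases DESC
LIMIT 5;
"""

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (user_id INT, order_date TEXT, amount REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [
        (1, "2024-06-05", 250.0, "completed"),
        (1, "2024-06-07", 400.0, "completed"),
        (2, "2024-06-10", 100.0, "completed"),
        (2, "2024-06-12", 999.0, "failed"),   # ignored by both queries
        (3, "2024-06-20", 550.0, "completed"),
    ],
)

top = conn.execute(top_users_query).fetchall()
big = conn.execute(big_spenders_query).fetchall()
print(top)  # user 1 leads with 2 completed purchases
print(big)  # only users whose completed orders total over $500
```

Running the queries against sample data is the "Evaluate" step made concrete: the result either matches your expectation or tells you exactly what to iterate on.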


Final Thought

Prompt engineering is not magic—it’s structure. It rewards clarity and punishes vagueness.

“AI reflects the structure of your thinking. If your thinking is messy, so is the output.”

Be intentional. Use the TCREI framework:

  • Define the Task

  • Provide Context

  • Give Reference

  • Evaluate the results

  • Iterate until satisfied

Prompting well is thinking well. And the better you think, the better AI responds.


Written by

Ray Mcmillan Gumbo

A deep thinker, builder, and learner sharing my journey through tech and thought. This blog is my space to reflect, explore hard questions, and document growth — not just in skills, but in purpose. It’s for anyone who feels lost, curious, or stuck — a reminder that your voice, ideas, and path still matter. Here, I write before ideas become products and code becomes real — the foundation behind Ryom and the questions that drive it.