From Prompts to Context with GitHub Copilot

Jorge Castillo
5 min read

In the past few weeks, I've embarked on an exciting journey that reshaped how I use AI in software development. Initially, my focus was on prompt engineering, a widely discussed technique where carefully crafted prompts help AI models produce better code or explanations. Yet, as I spent more time experimenting, I realized something crucial: a prompt alone was rarely enough. The models needed more context—much more.

Then came the "aha" moment: I stumbled upon a practice known as Context Engineering. It turns out the methods I had organically developed aligned closely with a structured and widely adopted practice already documented by the AI community.

What Exactly is Context Engineering?

Simply put, Context Engineering is the discipline of strategically selecting, compressing, and structuring information to enhance an AI model’s comprehension and outputs. Phil Schmid puts it elegantly: providing the "right info, in the right format, at the right time". LangChain further categorizes context engineering into four concrete strategies: write, select, compress, and isolate—each essential in guiding a model’s reasoning.
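The "select" and "compress" strategies are easy to picture in code. The sketch below is purely illustrative (the function names and the naive keyword-overlap ranking are my own assumptions, not a LangChain API): it picks the snippets most relevant to a query, then trims the result to a fixed character budget before it ever reaches the model.

```python
# Illustrative sketch of the "select" and "compress" strategies.
# All names here are hypothetical -- this is not a real library API.

def select_snippets(query: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the query ("select")."""
    query_words = set(query.lower().split())

    def overlap(snippet: str) -> int:
        return len(query_words & set(snippet.lower().split()))

    return sorted(snippets, key=overlap, reverse=True)[:top_k]

def compress_context(snippets: list[str], budget_chars: int = 200) -> str:
    """Concatenate snippets, truncating to a fixed character budget ("compress")."""
    joined = "\n---\n".join(snippets)
    return joined[:budget_chars]

docs = [
    "FastAPI routers group related endpoints under a common prefix.",
    "Azure Service Bus supports queues and topics for messaging.",
    "APIRouter instances are attached to the app with include_router.",
]
context = compress_context(select_snippets("FastAPI router endpoints", docs))
```

In a real setup the ranking would use embeddings rather than word overlap, but the shape is the same: choose less, structure it, and stay within budget.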

This resonated deeply with me. I realized I'd been instinctively applying these principles, though without formalizing them. Discovering this well-defined methodology gave me a roadmap to deepen and refine my practice.

VS Code 1.101: Custom Chat Modes and Tools

The recent release of Visual Studio Code 1.101 took Context Engineering to another level. Creating and using custom contexts no longer involves complicated hacks: you simply place a Markdown file describing your chat mode (for example, `planning.chatmode.md`) into your project's .github/chatmodes/ folder, and VS Code automatically makes it available within GitHub Copilot.

Here's a practical example of a custom chat mode file:

---
description: 'Generate an implementation plan for new features or refactoring existing code.'
tools: ['codebase', 'fetch', 'findTestFiles', 'githubRepo', 'search', 'usages', 'context7', 'sequential-thinking', 'microsoft-docs']
---
# Planning mode instructions
You are in planning mode. Your task is to generate an implementation plan for a new feature or for refactoring existing code.
Don't make any code edits, just generate a plan.

The plan consists of a Markdown document that describes the implementation plan, including the following sections:

* Overview: A brief description of the feature or refactoring task.
* Requirements: A list of requirements for the feature or refactoring task.
* Implementation Steps: A detailed list of steps to implement the feature or refactoring task.
* Testing: A list of tests that need to be implemented to verify the feature or refactoring task.

Once you select this mode, GitHub Copilot immediately generates an implementation plan for new features or for refactoring existing code, instead of jumping straight into edits.

Leveraging Community Wisdom: awesome-copilot-chatmodes

Exploring community resources like the awesome-copilot-chatmodes repository was another critical turning point. The repository is a treasure trove of ready-to-use chat modes, prompts, and instructions, ranging from rigorous code reviewers to supportive debugging assistants. It provided practical insights into how structured, predefined contexts can significantly enhance AI performance.

Experimenting with these resources, adapting them, and integrating them into my workflow significantly streamlined my tasks.

My Context Stack: MCP Tools I Can’t Live Without

Three MCP tools, in particular, transformed my development experience:

  • Context7: Automatically retrieves the latest documentation and practical code snippets. This ensures GitHub Copilot delivers up-to-date and accurate code suggestions rather than outdated or generalized information.
  • Sequential Thinking: Reveals the AI's chain of thought step-by-step. This visualization helps me instantly verify whether the AI model fully grasped my instructions, making it easy to refine prompts and instructions in real-time.
  • Microsoft Docs MCP: Provides authoritative references for .NET and Azure, ensuring Copilot aligns with official Microsoft standards and best practices.
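For reference, in recent VS Code versions these MCP servers can be registered in a `.vscode/mcp.json` file. The configuration below is a sketch of my setup; the package names and the Microsoft Docs endpoint are the ones published by each project at the time of writing, so check the respective documentation before copying it.

```json
{
  "servers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    },
    "microsoft-docs": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}
```

The server names chosen here ("context7", "sequential-thinking", "microsoft-docs") are what the `tools` list in the chat mode file refers to.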

Using these tools not only improved the immediate quality of AI-generated outputs but significantly increased my confidence and understanding of the model’s reasoning.

The Importance of Planning First, Coding Later

One crucial lesson from this experience was reaffirming the importance of meticulous planning. Before initiating any coding session, I adopted a structured preparation approach:

  1. Clearly outlining project goals, constraints, and success criteria.
  2. Curating and preparing relevant documentation snippets, best practices, and examples.
  3. Establishing preliminary test scaffolds to communicate intent.
  4. Selecting an appropriate, context-specific chat mode.

This structured approach helped ensure GitHub Copilot produced predictable, aligned, and reliable outputs, vastly reducing instances of hallucinations or off-target suggestions.

Real-World Results: Wins and Lessons Learned

The results were remarkable:

  • Significant time savings: Tasks such as generating FastAPI router boilerplates went from approximately 20 minutes to less than 5 minutes.
  • Improved reasoning: Sequential Thinking instantly flagged logical oversights—such as missing rate-limiters or incomplete middleware implementations.
  • Heightened awareness: Despite its capabilities, Copilot occasionally produced incorrect Python code (for instance, it kept inserting new functions at the top of the file, above the imports), underscoring the need for careful human review.

From this experience, I distilled several best practices:

  • Iterative prompt refinement: Minor wording adjustments significantly improved outputs.
  • Testing as a guardrail: Pre-defined tests ensured functionality remained within expected boundaries.
  • Focused context bundles: Precise, concise contexts proved more effective than broad, general-purpose prompts.
  • Mentoring mindset: Treat AI like a junior developer, guiding and reviewing its contributions carefully.

Looking Forward: Next Steps in My Context Engineering Journey

There’s still much to explore. On my roadmap:

  • Crafting more specialized persona modes for testing, security, and performance optimization.
  • Experimenting with Claude Code.
  • Developing debug-aware prompts to automatically detect and address errors more efficiently.

If you've explored these avenues or discovered innovative techniques, I would love to exchange ideas and experiences.

Your Turn: Dive into Context Engineering

I encourage you to experiment with Context Engineering today. Start by creating a simple custom chat mode for your next project—it's easier than you think and immensely rewarding. Share your experiences, discoveries, and insights in the comments below. Let’s keep learning and improving together!

Happy coding!


Written by

Jorge Castillo

I’m a seasoned software architect and technical leader with over 20 years’ experience designing, modernizing, and optimizing enterprise systems. Lately I’ve been harnessing large language models—integrating agents like GitHub Copilot, Cline, and Windsurf—to automate workflows, build n8n and VS Code extensions, and power custom MCP servers that bring generative AI into real-world development. A cloud-native specialist on Azure, I’ve architected scalable, resilient microservices solutions using Service Bus, Cosmos DB, Redis Cache, Functions, Cognitive Services and more, all backed by DevOps pipelines (GitHub Actions, Azure DevOps, Terraform) and strict IaC practices. Equally at home crafting UML diagrams, leading multidisciplinary teams as CTO or tech lead, and championing agile, TDD/BDD, clean-architecture and security best practices, I bridge business goals with robust, future-proof technology solutions.