10 Rules for Writing Production-Ready Code with AI

Bogdan Bujdea
12 min read

Welcome to the second edition of The Copilot’s Log.

If you’re a developer experimenting with AI tools like Copilot, Cursor, or Claude, you’ve probably seen both ends of the spectrum: blazing-fast progress… and mysteriously broken code.

That's because AI can be an incredible force multiplier, but only if you use it the right way.

In this edition, I’m sharing 10 habits and principles I’ve picked up from building real products with AI tools. These aren’t abstract theories or copy-paste workflows; they’re lessons learned the hard way, after shipping features, breaking things, and figuring out what actually works.

Let’s dive in.

1. Know How LLMs Think (Because They Don't)

LLMs (Large Language Models) like GPT or Claude generate code by predicting the next most likely word or token—just like a hyper-advanced autocomplete trained on billions of code and text examples.

Imagine you type: “The quick brown fox jumps over...” The model fills in: “the lazy dog.” It’s not thinking, it’s guessing what’s most likely to come next based on patterns in its training data.

When you ask for code, it’s the same. The more details you give (context, requirements, style), the more likely you’ll get something useful and accurate. If you’re vague, you’ll get a generic or even wrong answer—because the model is just picking from what it’s seen before, not truly understanding your intent.

But the amount of context you can provide is limited by the “context window” of the model (how much text it can process at once). If you try to include your entire codebase, it will start forgetting details from earlier in the conversation.

But the one thing you should understand here is that prompts are an important skill—you shouldn’t expect an LLM to provide good answers with vague requirements. It’s not magic; your results depend on how you ask.

More details on how LLMs really work and how to master context will be in the next edition.


2. Zero Trust Coding: Always Review the AI’s Work

As I said earlier, the better the prompt, the more accurate the results. But how can you be sure you gave the best prompt, the one that produces the best result? Worse, what if there is no correct answer, yet the LLM confidently provides an incorrect one anyway?

No matter how good the suggestion looks, always review every line, especially for anything beyond throwaway code.

Why? As I shared last time: I once asked Copilot to remove a C# entity from a microservice. It “helped” by deleting a database table I didn’t mean to touch. We lost QA data, and over 50 people were blocked for hours. The AI did what I said (and more), because I didn’t review the changes closely enough.

Practical Advice:

  • Treat every AI commit like a PR from an overeager junior dev.

  • Use git diff to inspect all changes, especially deletions and multi-file edits.
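A review pass like this can be sketched with a few git commands. The snippet below builds a throwaway demo repo (the file names are hypothetical) to show how git diff catches exactly the kind of silent deletion from the story above:

```shell
# Demo: a throwaway repo where an AI edit "helpfully" deletes a file,
# and a git diff review step that catches it before commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "order logic"   > OrderService.cs
echo "qa seed data"  > QaSeed.sql
git add . && git commit -qm "baseline"

# ...imagine the AI deletes a file you never asked it to touch...
rm QaSeed.sql

git status --short                 # quick scan: which files changed?
git diff --stat                    # per-file summary of pending changes
deleted=$(git diff --diff-filter=D --name-only)   # ONLY deletions
echo "deleted: $deleted"
```

Running the full `git diff` afterwards gives you the line-by-line review; the `--diff-filter=D` trick is just a fast way to surface the scariest category first.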


3. Iterate in Small Steps

Don't try to generate an entire feature from one prompt. Break your work into multiple steps: focus on a small piece at a time, make sure it works, and always commit or stage before moving on. This way, you always have a clean, working state to return to. Plus, if you break something, it’s easy to pinpoint what changed or roll back to your last safe spot.

Example: Let's say you use this prompt: “Refactor the OrderService class and fix performance issues”. You might end up spending hours talking with the AI and getting nowhere.


Instead, you should do it like this:

Step 1: Stage all your changes with git (git add .)

Step 2: Use a focused, targeted prompt for just one small change. For example: "The OrderService class has many functions that use the same code for authentication. Move that code into a helper function and call it to prevent duplication.”

Step 3: Review the code. At this stage you have three options:

  • Code is flawless: continue to step 4

  • Code needs small changes on your part: make the changes and continue

  • The code has too many issues: reset the changes and try again with a different prompt, this time adding the adjustments needed so it doesn't end up in the same state. For example, if the duplicated code is removed but the helper function does something completely unrelated to what it did before, say: "The OrderService class has many functions that use the same code for authentication. Move that code into a helper function and call it to prevent duplication. The helper function should keep the same logic as the duplicated code that we remove." Send the prompt and go back to Step 3. If you do this 2-3 times and it still doesn't work, just do the refactoring yourself! Worst case, you’ve lost 30 minutes experimenting with AI instead of spending a day doing the refactoring manually, so it’s not a waste of time in my opinion.

Step 4: Once you have code that you want to keep, stage your changes. Sometimes the code might be 99% perfect, and you just want a small tweak (like updating the text of a button), but the AI updates every button in the app instead. If this happens, reverting manually could take ages, but with git you can instantly roll back and get back to your 99% working state. Staging your changes early and often is the safety net that saves you from these moments.
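The safety net from Step 4 is just the git index. Here's a minimal sketch (with a hypothetical Checkout.cs and button text) of staging a good AI change, letting a later prompt wreck the file, and rolling straight back to the staged copy:

```shell
# Demo: staging as a checkpoint between AI prompts.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'button text: "Buy"' > Checkout.cs
git add . && git commit -qm "baseline"

# The AI made a change you like -- stage it as a checkpoint.
echo 'button text: "Buy now"' > Checkout.cs
git add Checkout.cs

# The next prompt goes wrong and rewrites the file.
echo 'button text: "CLICK ME!!!"' > Checkout.cs

# Roll back to the staged 99%-working state in one command.
git restore Checkout.cs   # restores the working file from the index
cat Checkout.cs
```

git restore (available since git 2.23) pulls the file back from the index by default, so the bad prompt costs you seconds instead of a manual revert.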

Step 5: We'll now use this prompt: “The ReadOrders function takes too long and I know the query for retrieving the orders is the main issue, give me at least two ways it can be improved.”

Step 6: You should now have at least two options for improving the performance, choose one or continue the conversation until you find an acceptable solution. If the AI is unable to provide a solution, then maybe you need to give it more details. For example, the LINQ query might be perfect so the AI can't give any more solutions, but if you provide the database structure in the context it can suggest creating an index. That's why it's important to give as many details as possible.

Step 7: Continue this cycle for each sub-task. With this approach, something that used to take a day or two might be done in 30 minutes (on a good day!).

The key is to work incrementally, stage your changes often, and never be afraid to reset and try again. This habit saves you hours of debugging and gives you confidence in every step.


4. Play to AI’s Strengths

LLMs truly shine when you use them for what they do best: generating and summarizing text. Their sweet spot includes:

  • Writing Documentation: Give the AI a few bullet points or a rough outline and it can produce a professional, typo-free README or even detailed tickets for Jira. You’ll be surprised how much time you save and how much clearer your docs become.

  • Generating Scripts: Whether you need a bash script, a one-off migration, or a quick automation, AI is fast, reliable, and generally accurate for these bite-sized, isolated tasks. Scripts are usually just a single file, without hundreds of dependencies spread across a codebase, making LLMs ideal for generating/changing scripts quickly and safely.

  • Brainstorming: Stuck on naming, architectural choices, or edge cases? Use the LLM to quickly list pros/cons, generate alternatives, or unblock your thinking, then refine the results with your own expertise.

LLMs were designed to generate natural language, so let them handle the boilerplate and wordsmithing while you focus on building and reviewing.


5. Know Your Tool Inside Out

Each AI coding tool has its own advanced features. Ignoring them is like moving from Notepad++ to Visual Studio but only ever using Visual Studio's text editor: you're missing out on 95% of its capabilities!

Most devs only scratch the surface of what these AI tools can do by treating them like "chat with AI" tools, and they end up missing out on real productivity gains. For example:

  • Cursor: Earlier I mentioned how I use Git to stage changes between prompts, but did you know that Cursor lets you instantly undo your last set of changes done by AI?

  • Copilot: You can add a copilot-instructions.md file to your repo to give additional context for the work it does in that repository. Say your team uses XUnit and NSubstitute for tests, but each time you ask it to write a new test class it uses MSTest and Moq. Instead of mentioning these libraries in each prompt, just update your copilot-instructions.md file with this: “My unit tests are written with XUnit and I use NSubstitute for mocking.” Copilot will then use your stack by default, letting you focus prompts on what to test.

  • PS: Cursor has "rules" that work the same way as copilot-instructions, and are a bit more advanced, but I'll talk about this in a later edition.
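For instance, a minimal .github/copilot-instructions.md for the testing scenario above might look like this (the last two lines are illustrative additions, not part of the example from the bullet):

```markdown
# Instructions for GitHub Copilot

- My unit tests are written with XUnit and I use NSubstitute for mocking.
  Never use MSTest or Moq.
- Keep new code consistent with the existing folder-per-feature structure.
- Prefer async/await over blocking calls.
```

Copilot picks this file up automatically for every request in the repository, so the instructions act like a prompt prefix you only write once.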

It’s worth taking a few minutes to learn these features. You’ll save yourself hours (and frustration) in the long run, and your AI generated code will be much more accurate.


6. Why Mainstream Stacks Work Best with AI

AI models are only as good as their training data. This doesn't mean you should switch to React just because it's more popular than Blazor. On the contrary, you should use the technology where you're most experienced, so you can catch issues faster and review the AI’s output with confidence.

However, if you sometimes struggle to generate good code for a less popular technology, now you know why: there just isn’t as much high-quality example code for the model to draw from. You can still get good results, but you might need to invest more time in better prompts and context.

On the other hand, if you’re a CTO or tech lead deciding on the stack for a new project, consider that your team will generally get better AI-assisted results with more popular technologies. If fast onboarding and high-quality AI suggestions are a priority, investing in a mainstream stack pays off, not just for you, but for anyone using these tools on your codebase.


7. Enforce Restrictions Early

Turn on strict modes in your language. For example:

  • C#: In your .csproj, set TreatWarningsAsErrors to true.

  • TypeScript: Enable strict mode.

  • And so on...
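For C#, the setting is one property in your .csproj. Here's a minimal fragment; the Nullable line is an extra suggestion of mine, not required for warnings-as-errors:

```xml
<!-- .csproj: turn every compiler warning into a build error -->
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <!-- Optional: enables the nullability warnings discussed below -->
  <Nullable>enable</Nullable>
</PropertyGroup>
```

For TypeScript, the equivalent is "strict": true under compilerOptions in tsconfig.json.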

Why? These guardrails force both you and the AI to write safer, more robust code, and make it much easier to spot potential issues. For example, in C# I often see warnings about a possible NullReferenceException, and they very often turn out to be real. Some devs ignore these warnings, but I always double-check whether each one is legitimate. By doing this, I’ve almost eliminated these errors from my projects, when they used to be my most common runtime bug.

Another “restriction” is writing unit tests. If you ask the AI to modify code that’s covered by tests, you can immediately run them to see if it works—no guessing, just feedback.


8. Use MCP servers

I love using MCP servers, they’ve become essential in my workflow. To give you one example, in my current project I use Azure Boards, and the MCP server made onboarding ridiculously easy. For example, if I’m assigned a ticket with ID #1234, I just use a prompt like:

“Read ticket #1234, analyze the codebase based on the description, and determine what changes need to be made. Then provide a summary.”

Remember how long it used to take to do even the simplest task on a new project? You’d have to hunt for the right files, piece together context, and hope you didn’t miss anything. Now, AI can instantly show you where to start and what to look for. You still need to review and understand the code yourself, but with AI guiding you, you get a huge head start.

Although it's a new concept, MCP servers are already very popular, and it's now easy to find (or create) one for basically anything: Azure, GitHub, Todoist, etc.

Here are some places where you can look for MCP servers:

https://docs.cursor.com/en/tools/mcp

https://mcpservers.org/

https://mcp.so/
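To wire one up in Cursor, you typically add an entry to .cursor/mcp.json. The sketch below is illustrative: the server package, env variable, and exact schema are assumptions on my part, so check your tool's documentation and the directories above before copying it:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

Once the tool restarts, the server's capabilities (reading issues, tickets, repos, and so on) become available to your prompts automatically.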


9. Structure Your Codebase for AI Agents

If you want the best suggestions from AI tools, make it easy for them (and your teammates) to find the right files and understand your project structure. The more predictable and well-organized your codebase, the better AI agents can navigate and make smart recommendations.

Best practices include:

  • Use clear, consistent naming conventions for files, classes, and functions. (e.g., OrderService.cs vs. svc1.cs)

  • Organize code by feature or domain, not just by layer or type.

  • Keep related code together, and avoid giant “miscellaneous” folders.

  • Maintain up-to-date README files and project documentation at the root.

  • Use standard folder names (src, tests, docs, etc.), so AI agents instantly know where to look.
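Put together, a feature-organized repo might look something like this (all names are illustrative):

```
src/
  Orders/
    OrderService.cs
    OrderValidator.cs
  Payments/
    PaymentService.cs
tests/
  Orders/
    OrderServiceTests.cs
docs/
  architecture.md
README.md
```

An agent asked to "fix order validation" can go straight to src/Orders/ instead of scanning the whole tree.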

This isn’t just for the AI: future you (and your team) will thank you, too. With an organized repo, AI tools can connect the dots more easily, suggest relevant changes, and avoid confusion.


10. Stay in the Loop

AI tools evolve fast... sometimes too fast.

I've bookmarked new tools or articles only to find them outdated a month later. What used to take years to change now happens in weeks. Keeping up isn’t optional if you want to use these tools effectively... but let’s be honest, it’s also exhausting.

You already have a full-time job. Staying current often means using your own time, and most of what’s out there is either noise or hype. A lot of popular content leans hard into vibe-coding and “AI will replace devs” takes. Not because it’s helpful, but because it gets clicks.

That’s why I write this newsletter: to offer a more grounded, practical perspective. No hype. No fear. Just real workflows that work in production.

How to stay up to date (without burning out):

  • Follow tool changelogs and release notes

  • Subscribe to developer-first newsletters (like this one)

  • Join communities around the tools you actually use

Stay curious, but filter hard!


That’s a wrap for the second edition of The Copilot’s Log. I aimed to keep it concise while giving you enough to build a solid foundation. In upcoming issues, I’ll dive deeper into each of these practices, with real examples and workflows you can try.

If this was helpful, there’s more where it came from—subscribe to The Copilot’s Log and follow me on LinkedIn for weekly tips, walkthroughs, and lessons from the field.


PS. Beyond writing this newsletter and my day-to-day job, I help dev teams level up their AI workflows through hands-on training. No buzzwords, no slides, just practical sessions focused on using tools like Copilot, Cursor, Claude Code or Windsurf effectively in production environments.

If your team is adopting AI coding tools and wants to get it right from the start, reach out. I’d be happy to help.
