The Two-Phase Approach to Coding Agents: Write First, Refactor Smart


The Problem Every Developer Faces with AI Coding Agents
If you've worked with coding agents like Sourcegraph's Amp, Claude Code, or GitHub Copilot Workspace, you've probably experienced this frustration: AI agents are terrible at writing clean, compact code.
Every bug fix becomes three new functions. Every feature request spawns duplicate methods scattered across multiple files. Agents seem to have an allergic reaction to the DRY principle—they'd rather write ten similar functions than refactor one reusable utility.
I spent a lot of time fighting this tendency, providing constant feedback through chat corrections and GitHub PR comments, trying to guide my agent toward cleaner code architecture. The overhead was enormous. I started questioning whether using coding agents was worth the time I spent cleaning up after them.
Then I discovered something that might work much better.
The Breakthrough: Playing to Strengths Instead of Fighting Weaknesses
While agents are poor at preventing code bloat, they're surprisingly excellent at identifying it. This realization led me to develop what I call the Two-Phase Approach:
Phase 1: Let the agent go wild - Focus purely on functionality
Phase 2: Use the agent as a code reviewer - Identify and eliminate bloat
A Real-World Example: Server Shutdown Logic
Let me walk you through a recent project where this approach saved me thousands of lines of code.
I was building a server shutdown system and decided to let my coding agent (Sourcegraph Amp) implement the initial version without my usual micromanagement. The result? A working system spread across multiple files with obvious duplication:
shutdown_core.py
shutdown_manager.py
worker_manager.py (with shutdown methods)
github_mcp_master.py (with more shutdown methods)
Instead of trying to prevent this mess during development, I finished the feature, made sure everything worked, and merged the PR. Then I started a fresh session with this simple prompt:
we have A LOT of shutdown code and logic:
shutdown_core.py
shutdown_manager.py
worker_manager.py (shutdown_all_workers, _shutdown_single_worker_async, _send_worker_shutdown_request, _wait_for_process_exit)
github_mcp_master.py (_shutdown_workers_enhanced, _wait_for_process_exit, _wait_for_port_release)
First, please check if all these classes and methods are really used (github_mcp_master is the main class)
The Analysis Results Were Eye-Opening
The agent delivered a comprehensive analysis:
USED COMPONENTS:
shutdown_core.py - Used by shutdown_manager, worker_manager, resource_manager
shutdown_manager.py - Used by github_mcp_master and github_mcp_worker
worker_manager.py shutdown methods - All used within worker_manager itself
github_mcp_master.py shutdown methods - All used within github_mcp_master
REDUNDANCY FOUND:
_wait_for_process_exit exists in both shutdown_manager.py and github_mcp_master.py
Both worker_manager.py and github_mcp_master.py have their own worker shutdown logic
The github_mcp_master uses ShutdownManager but also has its own _shutdown_workers_enhanced method
The agent identified the exact duplication I suspected but hadn't taken time to analyze systematically.
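To make the consolidation concrete, here is a minimal sketch of what a single shared helper for the duplicated _wait_for_process_exit logic might look like. I'm assuming the duplicates both poll a subprocess.Popen-style handle; the module name and signature are hypothetical, not the project's actual code.

```python
# shutdown_utils.py (hypothetical module name)
import time


def wait_for_process_exit(process, timeout: float = 10.0, poll_interval: float = 0.1) -> bool:
    """Poll a process handle until it exits or the timeout elapses.

    `process` is anything exposing poll(), e.g. subprocess.Popen.
    Returns True if the process exited within the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if process.poll() is not None:  # process has exited
            return True
        time.sleep(poll_interval)
    return process.poll() is not None
```

With a helper like this, both shutdown_manager.py and github_mcp_master.py could import one implementation instead of each carrying their own copy.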
From Analysis to Architecture
Instead of jumping straight to code changes, I spent time in conceptual discussion with the agent. We designed a cleaner architecture:
Old Flow (Broken):
Master → POST /shutdown to worker → Worker already shutting down → Endpoint fails → Master immediately kills process
New Flow (Clean):
Master → Send shutdown signal → Worker handles own shutdown → Worker closes connections → Worker exits cleanly → Master waits with timeout
The agent produced a detailed architecture document with transition phases, code examples, and a clear separation of concerns.
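The architecture document itself is project-specific, but the shape of the new flow can be sketched in a few lines. This is my own minimal illustration, assuming a SIGTERM-based shutdown signal and a subprocess.Popen worker handle; it is not the implementation the agent produced.

```python
import signal
import subprocess
import sys
import time


def worker_main() -> None:
    """Worker side: the worker owns its own shutdown."""
    shutting_down = False

    def handle_sigterm(signum, frame):
        nonlocal shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)
    while not shutting_down:
        time.sleep(0.1)          # placeholder for real work
    # close connections, flush state, then exit cleanly
    sys.exit(0)


def shutdown_worker(worker: subprocess.Popen, timeout: float = 10.0) -> None:
    """Master side: signal the worker, then wait with a timeout."""
    worker.send_signal(signal.SIGTERM)   # ask the worker to shut itself down
    try:
        worker.wait(timeout=timeout)     # give it time to exit on its own
    except subprocess.TimeoutExpired:
        worker.kill()                    # force-kill only as a last resort
        worker.wait()
```

The key difference from the old flow is ownership: the worker decides how to wind down, and the master only escalates if the timeout expires.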
The Results Speak for Themselves
When we implemented the new architecture:
+1,440 lines added
-5,772 lines removed
Net reduction: 4,332 lines (75% code reduction!)
More importantly, the resulting code was cleaner, more maintainable, and actually worked correctly.
Key Principles for the Two-Phase Approach
Phase 1: Build for Function, Not Form
Let the agent focus on making things work
Don't interrupt with style or architecture feedback
Accept duplication and bloat as temporary technical debt
Guide only when the agent is completely stuck
Phase 2: Leverage Agent Analysis Skills
Start fresh sessions for review work
Ask specific questions about code organization
Request redundancy analysis across multiple files
Work conceptually before implementing changes
Use the agent to create refactoring plans
Why This Seems To Work Better Than Traditional Approaches
Traditional Approach Problems:
Constant interruptions break agent flow
Premature optimization slows development
Mixed feedback confuses the agent's context
Developer spends more time managing than building
Two-Phase Approach Benefits:
Agent can focus on core problem-solving
Natural separation of concerns (build vs. refine)
Leverages agent's strength in pattern recognition
More efficient use of developer time
Practical Tips for Implementation
Set clear phase boundaries - Don't mix building and refining
Use fresh sessions for analysis - Avoid context pollution
Ask specific questions - "Where is code duplicated?" not "Make this better"
Work architecturally first - Design before implementing changes
Measure the results - Track lines of code and complexity metrics (see the sketch after this list)
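For that last point, even a rough line count is enough to see whether a refactoring pass paid off. Here is a small sketch that tallies a diff using git's --numstat output; this is my own helper for illustration, not part of any agent tooling, and the ref names are assumptions.

```python
import subprocess


def diff_stats(base: str = "main", head: str = "HEAD") -> tuple[int, int]:
    """Return (insertions, deletions) between two git refs via --numstat."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        ins, dels, _path = line.split("\t", 2)
        if ins != "-":               # binary files report "-"
            added += int(ins)
            removed += int(dels)
    return added, removed


if __name__ == "__main__":
    ins, dels = diff_stats()
    print(f"+{ins} / -{dels} (net {ins - dels:+d} lines)")
```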
The Broader Lesson
This experience taught me something fundamental about working with AI tools: instead of trying to make AI do what it's bad at, find ways to leverage what it's good at.
In my experience, coding agents seem to excel at:
✅ Pattern recognition across large codebases
✅ Systematic analysis and comparison
✅ Rapid prototyping and iteration
✅ Following detailed specifications
Where I've observed them struggling:
❌ Preventive architecture design
❌ Maintaining context across long sessions
❌ Balancing multiple competing concerns
❌ Knowing when to stop adding features
What's Next?
I plan to apply this two-phase approach to future agent-assisted projects. While I don't have specific plans yet, I'm curious to see how this pattern might work in other areas of development where agents tend to create complexity.
Conclusion
The future of AI-assisted development isn't about making agents perfect at everything. It's about understanding their unique strengths and designing workflows that amplify those strengths while mitigating their weaknesses.
The two-phase approach has transformed my relationship with coding agents from frustrating to productive. Instead of fighting against their tendencies, I now work with them—and my experience has been very positive.
What's your experience with coding agents? Have you found effective strategies for managing code quality? Share your thoughts in the comments below.