The Problem with Vibe Coding and How to Fix It (with Better Prompts)


As large language models become more capable, so does the appeal of “vibe coding” - developing software through natural language instructions. The approach typically relies on broad, conversational prompts in place of structured, hand-written code.
But how effective is it in practice?
To explore this, we ran a development exercise using Claude 3.5 Sonnet inside the Windsurf IDE to build an internal tool called Scopic People.
The project was used to evaluate AI-assisted development workflows and to document which approaches produced the most consistent results. One key factor that shaped our outcomes was prompt structure.
In this blog, we summarize what we observed about prompt clarity, what limited the model's output, and which prompting strategies led to improvements. Let’s dive right in.
All findings below are based on Scopic’s whitepaper - AI-Powered Development: Promise and Perils.
Vibe Coding Has Its Limits
During the development of Scopic People, we found that providing broad or complex instructions often resulted in suboptimal output. Specifically:
- “Initially, providing comprehensive instructions for complex UI components overwhelmed the LLM, resulting in incomplete or incorrect implementations.”
- “Cascade entered repetitive loops, making the same changes or undoing previous work without progress.”
(Cascade is Windsurf's agentic coding assistant.) In practice, instructions that combined multiple tasks or lacked precision increased the likelihood of loops, errors, or rework. These observations pointed to the need for prompt refinement and clearer task segmentation.
What Helped: Structured Prompts
Adjustments to the prompting process led to more predictable and useful output. According to the whitepaper:
“Breaking down tasks into smaller, more manageable steps proved highly effective.”
This change allowed us to complete key parts of the project, including UI elements, authentication, and service logic, with fewer iterations and less backtracking.
After an unsuccessful attempt to generate the entire UI at once, we shifted to a component-by-component approach. This resulted in more consistent outcomes across layout and interface tasks.
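To make the shift concrete, here is a hypothetical illustration (the whitepaper does not reproduce the exact prompts used on Scopic People). A single broad request such as “Build the employee dashboard with navigation, profile cards, search, and role-based permissions” is the kind of instruction that overwhelmed the model. Breaking it into component-scoped prompts worked better:
- “Create a responsive page layout with a sidebar and a main content area.”
- “Add a profile card component that renders name, role, and contact details from a given object.”
- “Add a search input above the card list that filters cards by name.”
- “Hide the edit action on each card unless the current user has an admin role.”
Each prompt targets one component and one behavior, which keeps the output small enough to review and correct before moving on.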
Prompting Guidelines That Worked Well
The whitepaper outlines several best practices that helped improve efficiency and output quality during AI-assisted development:
1. Task Granularity
Breaking complex tasks into small, discrete steps rather than providing large, complex instructions.
2. Continuous Review
Reviewing every change generated by the AI before proceeding to catch issues early.
3. Version Control Discipline
Committing changes frequently to maintain a clear history and enable easy recovery from unsuccessful iterations.
4. Strategic AI Usage
Using AI to generate boilerplate, complex logic, and architectural patterns, while handling simple modifications manually.
5. Clear Instructions
Providing explicit, unambiguous instructions with the necessary context, especially for domain-specific functionality (see the example after this list).
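To illustrate the last point with a hypothetical example (not taken from the whitepaper): a prompt like “Add validation to the form” leaves the model guessing, whereas “Validate the employee form before submit: name and email are required, the email must match our corporate domain, and the start date cannot be in the past; show an inline error message under each invalid field” supplies the domain context needed to get a reviewable result on the first pass.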
These guidelines reflect the iterative approach used throughout the Scopic People project and offer a practical framework for teams working with AI development tools.
Final Thoughts
Our experience with Claude 3.5 Sonnet and Windsurf showed that prompt clarity was essential for achieving reliable results.
While natural language interfaces offer flexibility, their effectiveness is closely tied to how instructions are structured and reviewed.
Structured prompting helped reduce iterations, improve output accuracy, and maintain forward progress, especially when combined with human oversight.
To read more about the experiment and its outcomes, including time savings and the full methodology, download the whitepaper, AI-Powered Development: Promise and Perils.
Written by Scopic
Scopic is a global software development company sharing real-world insights on AI, dev workflows, and digital product building.