2025: The year of the AI dev tool tech stack

Table of contents
- Everyone has AI pain points now
- A new way to code = A new tech stack
- What’s in the AI dev tool tech stack?
- Foundational: AI coding tools
- Essential layer: AI code review tools
- Optional layer: AI QA test generation & execution tools
- Optional layer: AI refactoring tools
- Optional layer: AI documentation tools
- Sample stacks
- Building your own AI dev tool stack: What to consider

In April, Microsoft and Google announced that AI now generates roughly 30% of the code at their companies. That signals that AI coding tools have entered a new phase: they’ve become a significant part of engineering workflows – even at large, enterprise companies.
With Dev Twitter obsessed with vibe coding these days, the question many of the devs we talk to are asking is: what does all this AI use actually look like? Are developers vibe coding whole features for production using agentic coding capabilities? Or are they using AI primarily for tab completion and early prototyping?
Ultimately, devs want to know what successful AI adoption really looks like across teams, companies, and industries. What AI tools are teams actually using? How are they getting real value from them? What rules, if any, are companies putting in place around AI usage? Are AI coding tools really boosting productivity, or just helping teams code faster but with more bugs?
At CodeRabbit, we talk to hundreds of engineering teams every month about how they're using AI. That gives us early visibility into trends around AI adoption, and in the last few months, we've seen striking similarities in the ways development teams are thinking about AI.
Let’s dive into what we’re hearing from customers – and why it’s convinced us 2025 is the year of the AI dev tool tech stack.
Everyone has AI pain points now
It likely comes as no surprise that one of the major pain points the teams we talk to report with their AI coding tools is that the productivity and DevEx gains those tools deliver are inconsistent. With studies finding that AI coding tools can add up to 41% more bugs to your code, these tools have come with new challenges of their own.
A couple of weeks ago, Ryo Lu, Cursor’s Head of Design, wrote a thread about the potential downsides of using Cursor to write code. In it, he listed 12 steps to take if you don’t want to end up with AI spaghetti you’ll be cleaning up all week.
A tool that requires a 12-step guide for avoiding disastrous spaghetti code might be fine if you’re vibe coding a hobby project or on a team of mostly senior devs who can catch and edit out the spaghetti, but imagine what a junior developer could do to a legacy codebase in a highly regulated Fortune 500 company!
In addition to more bugs and issues, we’re also hearing that AI coding tools have created bottlenecks at other points of the development cycle.
It goes without saying that if you’re writing more code, you have to review more code, test more code, document more code, and refactor more code. Very quickly, your ‘game-changing’ AI productivity gains get held up at other manual parts of the development cycle. And that work can be harder and more time-consuming given AI-generated code’s tendency to have more issues.
A new way to code = A new tech stack
That’s why many devs have come to an important realization this year: You can’t just introduce a transformative technology and leave the rest of the software development cycle intact. You need an end-to-end AI dev tool tech stack.
It’s common for disruptive technologies to spark broader ecosystem changes. A great example is how GitHub’s 2008 launch was followed three years later by a wave of CI tools, including CircleCI and Jenkins. AI coding tools seem to be following an even faster timeline.
After a few years of using them, engineering leaders have realized that AI coding tools sometimes help and sometimes hurt. To actually realize the promised productivity gains, they need additional tools for the downstream tasks those coding tools create or make more difficult.
But this shift to thinking about AI adoption as a stack is also about applying the approach that worked for code generation – leveraging AI to boost productivity – to the other manual tasks in the development cycle. Why not review faster and test faster if you’re coding faster, especially since almost no one loves reviewing code or writing tests?
In some cases, the ROI of leveraging AI at other stages of development might even be higher than what AI coding assistants deliver. That’s because those AI tools work to remove bugs from code rather than adding them in.
What’s in the AI dev tool tech stack?
The AI dev tool stacks we’re seeing our customers adopt are layered sets of AI tools that support every stage of the software development lifecycle.
Here’s a quick look at the layers of that stack, how they fit together, and why you’ll probably be using most of them by the end of this year – if you aren’t already.
- Foundational: AI coding tools
- Essential layer: AI code review tools
- Optional layer: AI QA test tools
- Optional layer: AI refactoring tools
- Optional layer: AI documentation tools
Foundational: AI coding tools
This is where most teams start. These tools help developers write code faster – either by suggesting autocompletes of what you’re currently writing or by generating entire functions, tests, or components based on natural language prompts. Over time, they’ve become more sophisticated with deeper codebase awareness, a greater commitment to code quality, and a recent focus on agentic, multi-step tasks. But these tools are still notorious for introducing bugs, vulnerabilities, and performance inefficiencies into code. That translates into developers doing a lot more code editing and reviewing.
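To make the “generating entire functions from natural language prompts” part concrete, here’s a hypothetical sketch of the kind of output these tools produce. The prompt and the generated helper below are invented for illustration; they aren’t taken from any specific tool.

```typescript
// Prompt to the assistant: "Write a debounce helper that delays a function
// call until the caller has stopped invoking it for waitMs milliseconds."
//
// Typical generated output: correct for the happy path, though generated code
// often skips edge cases (cancellation, leading-edge calls) that a human or
// AI reviewer still has to catch.
export function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (...args: Args): void => {
    if (timer !== undefined) {
      clearTimeout(timer);
    }
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```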
Increasingly, we’re hearing two things. First, devs aren’t just using one tool but often leveraging multiple tools based on what each tool is best at (a process satirized in this tweet). Second, devs are increasingly opinionated about which tool or tools they want to use – with the choice of an AI coding assistant becoming as divisive as whether to use a PC or a Mac.
That’s led many teams to start giving developers a choice of AI assistant rather than buying licenses for just one. And since developers are likely to be more effective with the tool they prefer, that benefits companies, too.
We break these tools into five categories – though many tools span multiple categories.
- Tab completion tools: GitHub Copilot, Cursor Tab, Windsurf, Tabnine, Sourcegraph Cody, Qodo, JetBrains
- AI coding assistants: GitHub Copilot, Cursor, Windsurf, Claude Code, OpenAI Codex CLI, Zed, Sourcegraph Cody, Aider, Qodo, Cline, Roo Code, Blackbox AI, OpenHands, Gemini Code Assist, Augment Code, Amazon Q, JetBrains AI Assistant
- Agentic coding tools: Cursor, Windsurf, GitHub Copilot, Claude Code, OpenAI Codex, Cline, Roo Code, Blackbox AI, Continue, Devin, Jules, Augment Code, OpenHands
- AI app generator tools: Lovable, v0, Bolt, Builder.io, Figma Make, Fine.dev, Stitch
- Codebase context tools: Repomix, Repo Prompt, Context7
Essential layer: AI code review tools
AI code review tools sit at the center of the stack because they directly address the biggest bottleneck introduced by AI coding tools: the review process. If your code is getting written faster – and more often – by machines, then you need a better way to review it.
Trying to manually review ever more code as a team isn’t just a recipe for burnout; it also risks quality degradation. Research shows that most devs can only manually review up to ~400 lines of code before fatigue sets in. That fatigue can mean devs miss critical bugs and then have to address them in production.
Indeed, AI code review tools don’t just help you merge PRs up to 4x faster and reduce the time you spend reviewing by up to 50%. They’re also essential in AI-assisted development for keeping bugs out of production, given that AI coding tools have been found to add up to 41% more bugs to code. Using them protects your AI productivity savings by ensuring bad code doesn’t end up in production.
AI code reviews also help improve code quality, reduce reviewer fatigue, and standardize best practices across teams no matter which AI coding assistants your team members are using. Unlike code generation and agentic coding tools, their output isn’t wildly inconsistent, since it doesn’t depend on any individual developer’s skill at prompting.
But, perhaps more importantly, they leverage AI for what it’s best at – automating repetitive and tedious tasks devs don’t want to do. Who wants to spend an hour adding a dozen comments to a PR when AI can add most of those comments for you, give you easy 1-click fixes for each of them, and find bugs you might have missed?
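To make that concrete, here’s a hypothetical example of the kind of issue an AI reviewer typically flags in a PR, along with the suggested fix it might attach as a one-click change. The function and endpoint are invented for illustration.

```typescript
// Original code in the PR: a reviewer would flag that a non-2xx response is
// silently treated as success and that callers get no useful error context.
async function loadUserProfile(userId: string): Promise<unknown> {
  const response = await fetch(`/api/users/${userId}`);
  return response.json(); // flagged: no status check, no error handling
}

// Suggested fix attached to the review comment: check the status code and
// surface a descriptive error instead of returning a malformed payload.
async function loadUserProfileFixed(userId: string): Promise<unknown> {
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Failed to load user ${userId}: HTTP ${response.status}`);
  }
  return response.json();
}
```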
These tools come in three main flavors:
- Code review features within AI coding tools: Cursor, GitHub Copilot, JetBrains, Windsurf Forge (deprecated)
- Git-based AI code review tools: CodeRabbit, Bito, Greptile, Qodo, Graphite Diamond
- IDE- and git-based AI code review tools: CodeRabbit, SonarQube, Qodo, Sourcery
Optional layer: AI QA test generation & execution tools
For many dev teams, QA testing has long included some form of AI. But a new generation of AI-powered QA tools promises to automate even more of the grunt work – especially around generating and maintaining tedious end-to-end tests that simulate real user journeys. Instead of manually thinking up every scenario, you can let an AI generate test cases or even entire test scripts from a natural language description of what needs to be checked.
The benefits are hard to ignore. The most important is speed: these tools can churn out or execute suites of tests in a fraction of the time and generate dozens of scenarios at once. They also help achieve greater breadth of coverage by running through permutations a human might overlook or not have time for. Some even offer self-healing capabilities that adjust tests when your UI or data changes, reducing maintenance headaches and keeping your test suite running smoothly as the app evolves.
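For a sense of what “test scripts from a natural language description” can look like, here’s a hypothetical sketch assuming a Playwright-based setup. The description might be “verify that a user can log in and see their dashboard”; the URL, labels, and credentials below are invented for illustration.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical end-to-end test generated from the description:
// "Verify that a user can log in and see their dashboard."
test('user can log in and see their dashboard', async ({ page }) => {
  await page.goto('https://app.example.com/login');

  // Fill in the login form and submit it.
  await page.getByLabel('Email').fill('test-user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery-staple');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // A successful login should land on a page with a visible Dashboard heading.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```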
We break these down into two categories:
- AI test generation tools: Testim, Mabl, Functionize, testRigor, Autify, ACCELQ, Qodex, Tricentis
- AI test execution and maintenance tools: MuukTest, Applitools, Sauce Labs, Perfecto, Meticulous
Optional layer: AI refactoring tools
While some AI coding tools claim they can be used for refactoring, their results are often lackluster. For that reason, many companies add AI tools built explicitly for refactoring to their dev tool stack – often after a bad experience trying to use coding tools for that job.
AI-powered refactoring tools promise to automate the tedious and repetitive work of improving your codebase, from minor optimizations to significant architectural changes. Instead of spending hours manually hunting down inefficiencies or repeating the same structural tweaks across your codebase, these tools can quickly identify refactoring opportunities – and even execute them – from a simple natural-language description.
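As a simple, hypothetical illustration of the kind of repeated structural tweak these tools automate, here’s a refactor that collapses duplicated request logic into a single helper. The endpoints and function names are invented for the example.

```typescript
// Before: the same fetch-and-validate boilerplate is repeated per endpoint.
async function fetchOrders(): Promise<unknown> {
  const res = await fetch('/api/orders');
  if (!res.ok) throw new Error(`GET /api/orders failed: ${res.status}`);
  return res.json();
}

async function fetchInvoices(): Promise<unknown> {
  const res = await fetch('/api/invoices');
  if (!res.ok) throw new Error(`GET /api/invoices failed: ${res.status}`);
  return res.json();
}

// After: a refactoring tool extracts the shared pattern into one helper and
// rewrites every call site to use it.
async function getJson(path: string): Promise<unknown> {
  const res = await fetch(path);
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return res.json();
}

const fetchOrdersRefactored = () => getJson('/api/orders');
const fetchInvoicesRefactored = () => getJson('/api/invoices');
```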
We divide these tools into two types:
Semi-automated tools: CodeGPT, GitHub Copilot, Amazon CodeWhisperer, Sourcegraph Cody
Fully automated tools: Claude Code, Devin, OpenAI Codex
Optional layer: AI documentation tools
While docs are never the first thing teams think about when adopting AI, documentation is one task they appreciate getting help with once they do. These tools tackle one of coding’s most dreaded chores – writing and updating code documentation, from inline comments to docstrings. Instead of manually documenting every new function or combing through outdated guides, devs can let AI tools quickly draft readable, up-to-date documentation directly from the code itself, saving countless hours of tedious work.
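Here’s a hypothetical example of the kind of code-level documentation these tools draft: a TSDoc comment generated from an existing function. The function itself is invented for illustration.

```typescript
/**
 * Calculates the total price of a cart in cents, applying a percentage
 * discount to the subtotal and then adding sales tax.
 *
 * (This comment block is the sort of thing an AI docs tool would draft from
 * the function body below.)
 *
 * @param itemPricesInCents - Individual item prices, in cents.
 * @param discountPercent - Discount applied to the subtotal, e.g. 10 for 10%.
 * @param taxRate - Sales tax rate as a fraction, e.g. 0.08 for 8%.
 * @returns The final total in cents, rounded to the nearest cent.
 */
export function calculateCartTotal(
  itemPricesInCents: number[],
  discountPercent: number,
  taxRate: number,
): number {
  const subtotal = itemPricesInCents.reduce((sum, price) => sum + price, 0);
  const discounted = subtotal * (1 - discountPercent / 100);
  return Math.round(discounted * (1 + taxRate));
}
```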
- Code-level docs tools: DeepWiki, Cursor, CodeRabbit, Swimm, GitLoop, GitSummarize
Sample stacks
So, what do some of these AI dev tool tech stacks look like? We’ve seen a range of configurations from company to company but here are some common stacks teams are using.
‘Comprehensive’ stack
There’s a growing group of companies we encounter who have implemented, or are in the process of implementing, an end-to-end AI dev tool stack that includes an AI-powered coding tool, code review tool, QA tool, refactoring tool, and docs tool.
These are typically companies where there’s been significant internal leadership around AI adoption, either from the C-suite or from engineering. They were also often early adopters of AI coding tools and have already seen their benefits, so they’re looking for additional AI productivity and DevEx gains.
‘Choose-your-own-AI-tool’ stack
We are increasingly seeing companies that are implementing AI tools throughout the development cycle AND giving their team more choice as to which tools they use. These companies understand (or have learned the hard way) that different AI tools are best suited for different kinds of work and that the best AI tool for any developer is the one they feel most comfortable prompting.
Anecdotally, this strategy hasn’t just helped increase AI adoption – it’s also improved developer satisfaction and experience at these companies. That’s because, increasingly, developers are opinionated about which tool they use. Some companies offer developers choice over just their AI coding tool (Cursor, Copilot, or Claude Code?) while others offer devs choice over other tools in the stack as well.
‘Multiple coding tools’ stack
Not to be outdone by the companies that let developers choose their own AI tools are the companies that let devs use multiple AI coding tools. Maybe they use Lovable for prototyping the UI and then Cursor to write the app. Or they use Tabnine for code completion and ChatGPT for code generation. More companies are saying yes to developers using more than one tool if they can make the case for why it will improve their productivity.
‘Partial’ stack
Not all of the companies we see building an AI dev tool stack are adopting every tool in it. Typically, however, their stacks involve an AI coding tool, an AI code review tool, and one more AI tool from our list – be that an AI refactoring tool, an AI QA tool, or an AI docs tool. Which they adopt often depends on their codebase, internal expertise, and needs. For example, larger companies are more likely to adopt AI QA tools since they have a large enough team to manage QA internally, whereas smaller companies are more likely to outsource QA to contractors and agencies.
‘Essential’ stack
Finally, we see a lot of companies building an ‘essential’ stack that includes just an AI coding tool and an AI code review tool, to help navigate the added bugs and more complicated code reviews that typically come with coding assistants. Code review tools also have some of the highest ROI of any AI tools – including AI coding tools – since they both save significant time and keep bugs out of production.
Building your own AI dev tool stack: What to consider
When it comes to building an AI dev tool stack, we’ve seen a number of approaches. Many adopted AI coding tools and then iteratively looked for individual solutions to the problems those tools created as downstream issues became particularly painful.
Other companies took a more intentional approach with CTOs or other technical leaders investigating tools that could improve the development cycle and running proof-of-concept tests to see whether they actually deliver results. Some even waited to adopt AI coding tools and leveraged AI code review tools to address their existing code review backlogs first.
We recommend a proactive approach since we often see teams suffering from delayed milestones and dev burnout before they start looking for solutions.
Want more info about what we’ve been seeing around AI adoption of specific tools? We have another post here where we go into greater detail about the different types of tools in each category and how we’re seeing them help engineering teams.
We’d love to hear more about how you’re building your AI dev tool stack and what’s working for you. Tag us on Twitter or LinkedIn.
Interested in trying out our AI code review tool? Get a 14-day free trial!
Written by
Aravind Putrevu
Engineer | Tech Evangelist | FOSS Enthusiast