AI dev tool stack: How engineering teams are using AI


In our last post, we covered why we think 2025 is the year of the AI tech stack, the layers in the stack, and even shared some sample stacks we’ve been seeing teams using.
Here, we'll dive deeper into how teams are actually putting these tools to work. We’ll look at the stack’s layers, the different types of tools in each, and how developers are using these AI tools to speed up development or tackle common pain points that come with adopting AI coding tools.
From codebase context tools that help you prompt AI assistants better to AI code review tools with agentic actions, each layer brings unique solutions to real-world headaches — both those that have always existed and the ones that are specific to AI coding tools.
We’ll walk through practical examples and share how we’re seeing teams integrating AI at every step.
The AI dev tool tech stack
As we mentioned in our previous post, teams are increasingly building out AI dev tool stacks—layered sets of AI-powered tools designed to support each stage of the software development lifecycle.
Here's a quick overview of the stack's layers, how they connect, and why you'll likely start using most of these tools soon.
Foundational: AI coding assistants
Essential layer: AI code review tools
Optional layer: AI QA test tools
Optional layer: AI refactoring tools
Optional layer: AI documentation tools
Foundational: AI coding assistants
AI coding assistants are the foundation for most teams adopting AI tools. Previously, we talked about how these tools span a wide variety of functions, from accelerating coding with autocomplete suggestions to generating entire functions and components from simple prompts.
Increasingly, developers use multiple coding assistants, choosing different tools for different strengths and personal preferences – which helps boost overall productivity and satisfaction. A great example of this is how ChargeLab was able to improve productivity by 40% by allowing their developers and teams to choose which AI tools to adopt.
We break these tools into five categories – though many tools span multiple categories.
Tab completion tools: Autocomplete but smarter
Tools: GitHub Copilot, Cursor Tab, Windsurf, TabNine, Sourcegraph Cody, Qodo, JetBrains
These tools don’t try to build your entire app. Instead, they help you write code faster and save cognitive effort by providing contextual code suggestions for repetitive coding tasks inside your IDE.
While the buzz around AI coding tools is currently focused on agentic capabilities, tab completion remains the most commonly used AI functionality in the companies we’re talking with. We estimate that around 90% of AI coding tool use so far has been autocomplete. That’s likely because these tools complement the developer rather than working independently, so their suggestions are less likely to require significant editing.
Some developers prefer tab completion tools over other types of AI assistants because they keep more control over the code they write while still saving time over writing it all themselves. Their use also tends to focus on automating simple things like class and interface names, which makes them easy for any dev to pick up.
Increasingly, tab completion tools are more context aware and predictive – understanding the context of your codebase and not just the code you’re currently working on.
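To make “autocomplete but smarter” concrete, here’s a small, hypothetical illustration (not tied to any specific tool) of the repetitive boilerplate these tools finish for you:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int
    # After typing the first two fields, a tab-completion tool will typically
    # suggest the rest based on how similar models are defined elsewhere in
    # the codebase -- e.g. the two fields below, accepted with a single Tab.
    currency: str = "USD"
    issued_at: str = ""
```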
AI coding assistants: Context-aware, multi-purpose tools
Tools: GitHub Copilot, Cursor, Windsurf, Claude Code, OpenAI Codex CLI, Zed, Cody by Sourcegraph, Aider, Qodo, Cline, Roocode, Blackbox, OpenHands, Gemini Code Assist, Augment Code, Amazon Q, JetBrains AI Assistant
AI coding assistants are often part of a new breed of AI-native editors that offer a number of AI coding tools like tab completion, code generation, AI chat, and agentic coding capabilities.
AI coding assistants are best at writing entire blocks of code with inline explanations. They can be incredibly effective at bootstrapping first drafts of new features, generating unit tests, and refactoring. However, they’re more likely than a tab completion tool to add bugs and issues to your code, and they generally require good prompting to get good results – as this tweet by Cursor attests. For this reason, the quality of suggestions varies with the developer's prompting expertise.
Because AI assistants can answer questions as well as generate code, they also help reduce context switching (and time spent on Stack Overflow) when writing code yourself.
While most AI coding assistants are IDE-based – like Cursor, JetBrains, Windsurf, Zed, and Copilot (available in VS Code and other IDEs) – some also operate in the CLI, including Claude Code, Aider, and OpenAI’s Codex.
AI coding assistants are more context-aware than tab completion tools and focused on learning your codebase and coding style over time to increase the relevance of their suggestions.
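To illustrate why prompting matters, here’s a hedged sketch of the kind of unit test an assistant might produce from a well-scoped prompt. The slugify function and the prompt are hypothetical examples, and real output varies by tool and model:

```python
# Prompt to the assistant (illustrative):
#   "Write pytest tests for slugify(). It lowercases text, replaces runs of
#    non-alphanumeric characters with a single hyphen, and strips leading or
#    trailing hyphens."
import re
import pytest

def slugify(text: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello, World!", "hello-world"),
        ("  spaced   out  ", "spaced-out"),
        ("already-slugged", "already-slugged"),
    ],
)
def test_slugify(raw: str, expected: str) -> None:
    assert slugify(raw) == expected
```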
Agentic coding tools: The next frontier (but pricey)
Tools: Cursor, Windsurf, GitHub Copilot, Claude Code, OpenAI Codex, Cline, Roocode, Blackbox AI, Continue, Devin, Jules, Augment Code, OpenHands
These tools often overlap with AI coding assistants, but we’ve put them in their own category since not all assistants have agentic capabilities and some tools focus purely on agentic coding.
These tools analyze your codebase to determine how best to approach coding tasks or solve problems. Autonomous or semi-autonomous agents then work through those tasks – writing tests, testing code, installing packages, fixing issues, or generating new code and raising PRs based on your requests. They can also understand your codebase and summarize files.
Typically, they can execute specific tasks, including modifying code and creating files, and they integrate with your development environment and interact with your tools.
Many can execute tasks autonomously without direct supervision – like Devin, Claude Code, and OpenAI Codex – while others, like Copilot and Windsurf, make suggestions that you have to approve. Depending on the task, developers might prefer one or the other.
Agentic coding tools are still in the early stages but are evolving fast. In the right hands and with the right tasks, they can offer major returns — or create really creative new bugs.
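Under the hood, most of these agents run some variant of a plan, act, verify loop. The sketch below is a deliberately simplified, hypothetical version of that loop – the propose_patch stub stands in for a real model call and isn’t any vendor’s API:

```python
# A minimal, hypothetical sketch of the plan -> act -> verify loop that
# agentic coding tools run internally.
import subprocess

def propose_patch(task: str, test_output: str) -> str:
    """Placeholder for an LLM call that returns an edit/diff for `task`."""
    return ""  # a real agent would generate an edit here

def run_tests() -> tuple[bool, str]:
    """Verify step: run the project's test suite and capture its output."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(task: str, max_iterations: int = 3) -> bool:
    passed, output = run_tests()
    for _ in range(max_iterations):
        if passed:
            return True  # done: open a PR, post a summary, etc.
        patch = propose_patch(task, output)
        # a real agent would apply `patch` to the working tree here,
        # then re-run the verify step
        passed, output = run_tests()
    return passed
```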
AI app generator tools: Generate an app or website fast
Tools: Lovable, v0, Bolt, Builder.io, Figma Make, Fine.dev, Stitch
These tools focus on quickly generating entire apps or websites rather than completing individual lines of code or generating features. They promise to build full-stack applications rapidly – from frontend UI design to backend infrastructure setup – and integrate with cloud databases. For that reason, they primarily appeal to non-developers. They also likely herald the end of no-code tools, since they simplify the process even more.
When developers do use app-generation tools, it’s usually to quickly prototype new ideas rather than to build something that will end up in production.
App-generation tools are more agentic than code generation tools and handle a broader scope of the development process. That can mean significant initial time savings but may require more oversight and editing downstream to bring generated apps up to exact specifications – and many devs question whether a generated app is actually ready for production.
App-generation tools are rapidly evolving to become increasingly sophisticated – allowing developers to describe application ideas while AI translates these descriptions into functional codebases with minimal manual intervention. However, they remain of limited use to many developers, who typically work on pre-existing codebases and applications.
Codebase context tools: Up-to-date codebase context
Tools: Repomix, Repo Prompt, Context7
These tools are crucial enablers for AI-assisted software development. They structure and deliver relevant slices of large codebases to AI models — giving the AI the context it needs to reason effectively across many files.
Developers simply prompt an AI assistant and these AI tools curate the most relevant parts of the codebase to feed into the model, ensuring the assistant isn’t flying blind in large or complex projects.
Codebase context tools can also help compress and structure prompts for AI agents, allowing them to maintain a functional understanding of a large codebase within token limits – improving the quality of your generated code while reducing cost for tools with consumption-based pricing.
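Here’s a rough sketch of the core idea, assuming a simple keyword-overlap ranking and a ~4-characters-per-token heuristic – real tools use far smarter retrieval and chunking:

```python
# A rough sketch of what codebase context tools do: pick the files most
# relevant to a query and pack them into a prompt under a token budget.
from pathlib import Path

def build_context(repo: Path, query: str, token_budget: int = 8000) -> str:
    keywords = set(query.lower().split())
    scored = []
    for path in repo.rglob("*.py"):
        text = path.read_text(errors="ignore")
        score = sum(text.lower().count(k) for k in keywords)
        if score:
            scored.append((score, path, text))

    context, used = [], 0
    for _, path, text in sorted(scored, reverse=True, key=lambda t: t[0]):
        tokens = len(text) // 4  # crude heuristic: ~4 characters per token
        if used + tokens > token_budget:
            continue
        context.append(f"# file: {path}\n{text}")
        used += tokens
    return "\n\n".join(context)
```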
Essential layer: AI code review tools
Next up are AI code review tools, a critical layer because they directly tackle the increased workload created by faster AI-assisted coding.
In our previous post, we highlighted how these tools help teams better manage the growing volume of code produced, reducing burnout from manual reviews. AI-driven code reviews not only speed up the process by allowing teams to merge PRs significantly faster – but also greatly improve quality by catching bugs early, reducing reviewer fatigue, and standardizing best practices. Some, like CodeRabbit, even have agentic workflows and can help with things like generating unit tests, making multi-file edits, or raising new PRs.
After AI coding tools, AI code review tools are the AI tool dev teams are most likely to adopt — both to deal with existing code review backlogs and to address the glut of AI-written PRs of questionable code quality.
Ultimately, they automate tedious tasks, freeing developers to focus on the high-impact work they genuinely enjoy.
These tools come in three main flavors:
Code review features of AI coding tools: AI coding assistants review themselves
Tools: Cursor, GitHub Copilot, JetBrains, Windsurf Forge (deprecated)
Some AI coding assistants offer code review tools as features included in their subscriptions or as add-ons. For example, Cursor’s subscription includes IDE-based code reviews and GitHub Copilot’s subscription includes CI/CD-based reviews. Up until April 2025, Windsurf also offered Forge, a CI/CD-based code review tool that was an add-on to their service. However, they recently deprecated it and relaunched code reviews as a feature of their main AI coding assistant.
To some, it makes sense to include AI code reviews as part of existing AI coding assistant tools, since code reviews are such a core use case for AI in development. However, many question how effective a coding assistant can be at reviewing the code it generated itself.
With code reviews, the best practice has always been to have peers or senior devs do the review, the goal being that several different sets of eyes look for potential issues. Having AI code reviews as a feature of the coding assistant deviates from that core security and quality practice. How can you expect the AI tool that added 41% more bugs to your code to find those bugs if it didn’t recognize them as bugs to begin with?
What’s more, AI coding assistants often prioritize low latency and real-time responses, which can lead to more superficial code reviews than standalone AI code review tools built for review quality.
Forge’s deprecation also suggests that, as a feature or an add-on, code reviews are unlikely to be a core focus of product development for AI coding assistants – especially as the space becomes more competitive and companies devote more time to improving their core offerings. That likely means standalone solutions will remain more comprehensive and feature-rich, and able to deliver additional value.
Git-based AI code review tools: Reviews that save teams time
Tools: CodeRabbit, Bito, Greptile, Qodo, Graphite Diamond
These tools run automatic reviews when you open a pull request. First-pass AI code reviews find bugs, security vulnerabilities, syntax errors, stylistic issues, and more – saving senior engineers the time of flagging those issues themselves.
These tools fit perfectly within your CI/CD workflow and existing code review processes while offering PR summaries and 1-click fixes to make it easier to both review and fix issues.
With codebase awareness and enhanced context, these tools can catch common issues early, find bugs you might miss, and enhance code quality across your codebase.
Offerings like CodeRabbit even have agentic chat and workflows, allowing you to generate docstrings and unit tests, make multi-file edits, raise PRs, and more simply by chatting with the AI reviewer.
Having AI code reviews at the CI/CD stage is critical to streamline the code review process while implementing code quality standards across the codebase.
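As a rough illustration of where these bots plug in, the sketch below posts a first-pass review back to a pull request via GitHub’s REST API once CI has the diff. The summarize_diff function is a hypothetical stand-in for the model call – real tools add inline comments, 1-click fixes, and much more:

```python
# Illustration only: when a PR opens, CI fetches the diff, asks a model for a
# first-pass review, and posts it back through the GitHub REST API.
import os
import requests

def summarize_diff(diff: str) -> str:
    """Placeholder for a model call that returns review feedback for a diff."""
    return "First-pass review: no blocking issues found."

def post_review(owner: str, repo: str, pr_number: int, diff: str) -> None:
    review_body = summarize_diff(diff)
    requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": review_body, "event": "COMMENT"},
        timeout=30,
    )
```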
Both IDE and Git-based AI code review tools: Reviews at every stage
Tools: CodeRabbit, SonarQube, Qodo, Sourcery
Few AI code review tools offer code reviews both in the IDE and in your CI/CD pipeline. Those that do provide the most comprehensive code review support by reducing bugs at multiple stages of the development cycle.
Connecting IDE and CI/CD reviews also enables multilayer reviews – a more seamless workflow with additional quality checks.
Optional layer: AI QA test generation & execution tools
We previously discussed how QA testing has traditionally incorporated forms of machine learning or AI, but newer tools are going even further by automating the most repetitive and time-consuming aspects of testing.
These AI-powered tools generate extensive and realistic test scenarios from simple descriptions, significantly speeding up the testing process. Beyond speed, they also enhance test coverage by considering numerous permutations a human tester might overlook. Additionally, some of these tools offer "self-healing" features that automatically update tests when your app’s UI or underlying data changes.
We break these down into two categories:
AI test generation tools: Test generation-only
Tools: Testim, Mabl, Functionize, testRigor, Autify, ACCELQ, Qodex, Tricentis
AI test generation tools don’t run or manage tests—instead, they automate the creation of test cases, scripts, or scenarios based on natural-language descriptions or by analyzing existing code paths.
Their main appeal is reducing the tedious, repetitive work of manually defining each individual test case to help QA engineers rapidly build out robust test suites.
Developers and QA teams appreciate these tools because they speed up initial test creation and are especially useful when expanding test coverage for large, complex, or legacy applications.
While great for generating volume quickly, these tools typically require manual fine-tuning and review to ensure accuracy and coverage.
Increasingly, these tools leverage deeper context-awareness to parse existing code and user journeys more intelligently, letting them propose test cases that closely align with real-world use cases.
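For a sense of the output, here’s an illustrative (and entirely hypothetical) edge-case matrix a test generation tool might produce from the one-line description “apply_discount caps discounts at 100% and never returns a negative price”:

```python
# Illustrative only: the function and the generated edge cases are hypothetical.
import pytest

def apply_discount(price: float, percent: float) -> float:
    percent = min(max(percent, 0.0), 100.0)
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize(
    ("price", "percent", "expected"),
    [
        (100.0, 0.0, 100.0),     # no discount
        (100.0, 25.0, 75.0),     # typical case
        (100.0, 100.0, 0.0),     # full discount
        (100.0, 150.0, 0.0),     # over-cap clamps to 100%
        (100.0, -10.0, 100.0),   # negative input clamps to 0%
        (0.0, 50.0, 0.0),        # zero price stays zero
    ],
)
def test_apply_discount(price: float, percent: float, expected: float) -> None:
    assert apply_discount(price, percent) == expected
```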
AI test execution and maintenance tools: End-to-end AI test support
Tools: MuukTest, Applitools, Sauce Labs, Perfecto, Meticulous
Full-lifecycle AI QA tools go beyond test generation and handle the entire testing process – from writing test cases to executing them automatically and even maintaining them as your application evolves.
Teams often favor these comprehensive tools because they dramatically reduce QA workload by automating not just initial test creation but also the ongoing upkeep required when the codebase or UI changes.
Though these tools significantly ease maintenance burdens, they can sometimes struggle with intricate or complex scenarios.
Increasingly sophisticated, these tools integrate seamlessly into existing CI/CD workflows, providing continuous, automated testing coverage across multiple environments and deployment stages.
Optional layer: AI Refactoring tools
Another crucial area is AI refactoring tools. While general AI coding assistants may claim refactoring capabilities, their outcomes often fall short. This has led many teams to adopt specialized AI refactoring tools explicitly designed for optimizing and improving codebases.
These dedicated tools automate tedious tasks, quickly identifying refactoring opportunities and applying them based on simple natural-language instructions – drastically cutting manual effort and improving code maintainability.
We divide these tools into two types:
Semi-automated tools: The tab completion of refactoring tools
Tools: CodeGPT, GitHub Copilot, Amazon CodeWhisperer, Sourcegraph Cody
Semi-automated refactoring tools don’t completely take the wheel. Instead, they proactively suggest code improvements within your IDE allowing you to quickly accept, reject, or modify each suggestion.
These tools focus on smaller-scale, incremental refactors—like simplifying methods, restructuring loops, or optimizing function logic—that benefit from a human eye before committing.
Developers prefer semi-automated tools for complex or sensitive refactoring tasks because they offer fine-grained control.
The appeal lies in their balance. They speed up routine refactors while leaving room for developer judgment — minimizing the risk of unwanted changes or subtle bugs sneaking into production.
Increasingly, semi-automated refactoring tools leverage deeper context-awareness, analyzing the broader codebase to offer smarter, more relevant suggestions.
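A typical suggestion looks something like the hypothetical before/after below – same behavior, just a tighter shape, left for the developer to accept, reject, or tweak:

```python
# Illustrative before/after for the kind of small, incremental refactor these
# tools suggest inline.

# Before: manual accumulation loop
def active_emails_before(users: list[dict]) -> list[str]:
    result = []
    for user in users:
        if user.get("active"):
            result.append(user["email"].lower())
    return result

# After: the suggested refactor -- same behavior, expressed as a comprehension
def active_emails_after(users: list[dict]) -> list[str]:
    return [user["email"].lower() for user in users if user.get("active")]
```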
Fully automated tools: When you want to give AI more autonomy
Tools: Claude Code, Devin, OpenAI Codex
These AI tools handle large-scale, repetitive refactoring tasks automatically across your entire codebase, often from just a single set of instructions or rules.
While semi-automated tools highlight refactoring opportunities for devs to review manually, fully automated tools excel at bulk tasks—such as upgrading dependencies, migrating frameworks, or standardizing code styles—potentially saving hours of repetitive work.
These tools appeal to teams looking to tackle technical debt at scale without spending days manually applying the same refactor across hundreds or thousands of files.
Developers appreciate their reliability and consistency for clearly defined, repetitive refactors, but fully automated tools generally work best when given explicit rules. They’re less suited for nuanced code improvements that require human judgment.
Increasingly, fully automated refactoring tools can parse multiple programming languages and integrate directly into existing CI/CD pipelines.
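As a minimal sketch of what a bulk, rule-driven rewrite looks like, here’s a hypothetical codemod that renames every reference to a deprecated function across a repo. Real tools rewrite concrete syntax trees to preserve comments and formatting, which ast.unparse() here does not, so treat it as illustration only:

```python
# Hypothetical codemod: rename every reference to a deprecated function.
import ast
from pathlib import Path

class RenameCall(ast.NodeTransformer):
    def __init__(self, old: str, new: str) -> None:
        self.old, self.new = old, new
        self.changed = False

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old:
            node.id = self.new
            self.changed = True
        return node

def rewrite_repo(repo: Path, old: str, new: str) -> int:
    changed_files = 0
    for path in repo.rglob("*.py"):
        transformer = RenameCall(old, new)
        tree = transformer.visit(ast.parse(path.read_text()))
        if transformer.changed:
            path.write_text(ast.unparse(ast.fix_missing_locations(tree)) + "\n")
            changed_files += 1
    return changed_files
```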
Optional layer: AI documentation tools
Finally, AI documentation tools, while not usually the first thought when adopting AI, have proven incredibly valuable.
As we previously noted, these tools tackle the often-dreaded task of writing and updating code documentation such as inline comments and docstrings. By leveraging AI, developers can quickly generate clear, accurate, and up-to-date documentation directly from their codebase, saving significant time and effort that would otherwise be spent manually maintaining documentation.
Code-level docs tools: Docs that stay in sync with your code
Tools: DeepWiki, Cursor, CodeRabbit, Swimm, GitLoop, GitSummarize
AI documentation tools analyze code structures and behaviors to produce readable, context-aware documentation drafts automatically—potentially cutting documentation time in half or more by generating inline comments, docstrings, API references, or even internal design and architecture docs.
These tools appeal to teams wanting to keep documentation continuously synchronized with code changes without manually updating every function comment or API description as the code evolves.
Increasingly, AI documentation tools support multiple programming languages and integrate directly into IDEs and CI/CD pipelines, proactively prompting devs to document their code as they write it, thus improving doc quality and reducing technical debt over time.
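Here’s a small sketch of the first step these tools typically perform – scanning for public functions that lack docstrings so a model can be prompted to draft them (the example.py path is just a placeholder):

```python
# Find public functions missing docstrings; a real tool would then draft docs.
import ast
from pathlib import Path

def undocumented_functions(path: Path) -> list[str]:
    tree = ast.parse(path.read_text())
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if not node.name.startswith("_") and ast.get_docstring(node) is None:
                missing.append(f"{path.name}:{node.lineno} {node.name}()")
    return missing

if __name__ == "__main__":
    for item in undocumented_functions(Path("example.py")):  # placeholder file
        print("needs a docstring:", item)
```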
Building your own AI dev tool stack
Adopting an AI dev tool stack isn’t just about throwing a couple of new AI tools into the mix. It’s about strategically bringing AI into every part of your development workflow. Using AI strategically at every step – from coding and reviewing to testing, refactoring, and documenting – can help your team get more done, reduce frustration, and significantly boost the overall quality of your codebase.
We’d love to hear more about how you’re building your AI dev tool stack and what’s working for you. Tag us on Twitter or LinkedIn.
Interested in trying out our AI code review tool? Get a 14-day free trial!
Written by Aravind Putrevu
Engineer | Tech Evangelist | FOSS Enthusiast