Code quality and review in the age of AI: less control, more responsibility
Posted: Thu Jul 10, 2025 3:41 am
Tools like GitHub Copilot, CodeRabbit, and Cursor aren't just speeding up production—they're also redefining what we mean by code quality and how we ensure it.
However, the most recent data, such as the DORA 2024 report, warns us: more code doesn't mean better software. In fact, teams that adopt AI without a clear strategy experience a 7% drop in stability and a 1.5% drop in delivery speed. It seems counterintuitive, but it makes sense: AI boosts pace, but it also amplifies errors if there's no solid quality process to support it.
Recommended articles before continuing reading:
Vibe-coding with AI
A top-down approach to programming with artificial intelligence, with real-life use cases and ready-to-use prompts.
ITDO Blog - Web, App, and Marketing Development Agency in Barcelona
AI in development teams: from vibe-coding to collaborative work
Artificial intelligence is no longer a promise: it's a reality that permeates the entire software development process.
ITDO Blog - Web, App, and Marketing Development Agency in Barcelona
The risk of writing more without understanding more
The first side effect is the loss of control. When we generate large blocks of code in seconds, we lose some of our deep knowledge of the system. Pull requests become larger, more complex, and harder to review. Even worse, AI makes different mistakes than humans do: it desynchronizes versions, silently deletes functionality, and invents names or functions. And because these are not our usual mistakes, we take longer to detect them.
Therefore, today more than ever, software quality is not a final phase or an isolated responsibility: it is a complete system that must be integrated from the first step of the development cycle.
Quality as a system, not as a moment
The teams that are best integrating AI into their workflows share one idea: quality isn't guaranteed at the end; it's built in from the beginning. This requires a process based on six pillars that reinforce each other:
Codified and shared practices: design principles, architectural patterns, naming conventions, security best practices, and accessibility. These aren't rules to limit creativity, but rather a common language for building better and reviewing faster.
True collaboration (pair programming, synchronous reviews): Shared design sessions or pair programming aren't a waste of time. They're a direct investment in collective knowledge, better decisions, and fewer errors.
Static and AI-assisted analysis: If a machine can detect stylistic errors, code smells, or security issues, all the better. This way, we reserve human energy for what matters: architectural decisions, trade-offs, clarity of purpose.
Automated testing as a first line of defense: Not all tests are equal. Prioritizing integration tests and those that cover critical business flows is key. They are your safety net and your living documentation.
Reviews as learning, not policing: If your process relies on serious bugs being detected in the final review, something is wrong. Code reviews should gradually reduce their scope and focus on transferring knowledge, detecting new patterns, and enriching shared practices.
Continuous feedback and improvement loops: Each part of the process can feed into the rest. If a review detects a common pattern, it can be turned into a new static analysis rule. If a pairing conversation improves the design, document it. It's a virtuous cycle.
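The last two pillars suggest a concrete habit: when a review flags the same issue more than once, encode it as an automated rule instead of relying on human attention. A minimal sketch in Python using the standard-library ast module; the convention being enforced here (no bare except: clauses) is only an illustrative stand-in for whatever pattern your team's reviews keep surfacing.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` clauses in `source`.

    A toy example of turning a recurring review comment into a rule
    that runs in CI: the check is cheap, deterministic, and frees the
    reviewer to focus on design rather than pattern-spotting.
    """
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # An ExceptHandler with no exception type is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

sample = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # → [4]
```

In practice a rule like this would live in your linter's plugin system (for example, a custom Pylint checker) rather than a standalone script, so it runs on every commit without anyone having to remember it.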