AI Doesn't Fix Your Org. It Amplifies It.

Every conversation nowadays is about AI: buy this tool, ship faster, win. The perception gap between marketing and reality is real, and as we navigate the wild west of experimentation to find our future tools, we must stay grounded in what’s achievable. AI tooling is without doubt extremely useful; I can do things in hours and days that would have taken weeks and months pre-AI. But the force multiplier of AI depends on the foundation it’s built upon, and you have to navigate correctly to reach your team’s true potential. The 2025 DORA State of AI-assisted Software Development report, drawing on nearly 5,000 technology professionals worldwide, helps guide us. AI is an amplifier. It magnifies whatever your engineering organisation already is.

“Think of a band plugging into an amp. If it’s a group of professional musicians, brilliant. Turn it up. If it’s a high school band of amateurs, you just made the noise a lot louder. You didn’t make it better.” Nathen Harvey, who leads the DORA research program at Google Cloud, on the GoTo Podcast.

That distinction matters, because most engineering organisations are closer to the high school band than they want to admit.

The AI Amplifier

As a pro-AI user, it’s an exciting time. We’re building things at speeds I once thought unimaginable. The business and investors are excited by the efficiencies and simultaneously panicked about being out-competed or losing their moat. A CTO reads about 46% throughput gains from AI-native IDEs.[^1] They roll out licences. Engineers start generating code at pace. And three months later, the same problems exist. They’re just happening faster; the problems get amplified. The DORA data backs this up directly. AI adoption is now linked to higher software delivery throughput, a reversal from 2024, when AI experimentation made throughput worse. But AI adoption still has a negative relationship with software delivery stability.[^2]

“…more than half of respondents deploy less than once a week. When deployments fail, 15% of teams need more than a week to recover. Nearly half say their teams operate ineffectively on at least one axis.” Rebecca Murphey at Swarmia.[^3]

Teams working in loosely coupled architectures with fast feedback loops see real gains from AI adoption. Teams constrained by tightly coupled systems and slow processes see little or no benefit.[^2] The architecture you already have determines whether AI accelerates you toward value or toward a cliff. If your automated test suite is thin, your deployment pipeline is manual, and your architecture couples everything together, AI-generated velocity just feeds more volume into a system that can't handle it. Harvey calls it the bottleneck misalignment problem: AI speeds up code generation (often not the actual bottleneck) and overwhelms downstream processes like testing, review, and deployment.[^6] You save four hours writing code. You lose six hours in slow builds, context switching, review queues, and firefighting. Net negative.
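To make the misalignment concrete, here is a back-of-the-envelope sketch in Python. The stages and capacities are invented for illustration; the point is that end-to-end throughput is capped by the slowest stage, so doubling code generation moves nothing.

```python
# Back-of-the-envelope model of the bottleneck misalignment problem.
# Stage capacities (changes per week) are invented for illustration.
pipeline = {
    "code generation": 40,
    "code review": 12,
    "testing (manual QA gate)": 8,
    "deployment (manual)": 10,
}

# End-to-end flow is capped by the slowest stage, not the fastest.
bottleneck = min(pipeline, key=pipeline.get)
print(f"Throughput: {pipeline[bottleneck]} changes/week, limited by {bottleneck!r}")

# Double code generation with AI: the bottleneck, and the throughput, don't move.
pipeline["code generation"] *= 2
print(f"After AI adoption: still {min(pipeline.values())} changes/week")
```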

“meetings, interruptions, review delays, and CI wait times cost developers more time than AI saves.” Rob Bowley’s analysis of the DX data.[^7]

“Some organisations are doubling their customer-facing incidents with AI, while others are halving them. The difference isn’t the tool. It’s the system the tool operates inside.” Laura Tacho, CTO at DX, on the Engineering Enablement podcast.[^5]

Deal with the Basics

The 2025 DORA report identifies two archetypes that are unlikely to see true AI gains: the “Foundational Challenges” and “Legacy Bottleneck” teams. They operate in survival mode, with significant gaps in process and environment. As one Scrum.org summary put it: these teams don’t need AI. They need a reset.[^4]

DORA's research goes further than just identifying the pattern. They built a capabilities model identifying seven foundational practices that amplify AI's positive impact. Not seven AI practices. Seven organisational practices. The list reads like a health check, not a procurement guide:

  • A clear, communicated AI policy (not "figure it out yourselves")
  • Healthy data ecosystems
  • AI-accessible internal documentation
  • Strong version control practices
  • Working in small batches
  • A user-centric focus
  • Quality internal platforms

Notice what’s missing? Specific tooling. Model selection. Prompt engineering courses. None of those made the list. The factors that determine AI success are the same factors that determined software delivery success before AI existed. Accelerate by Forsgren, Humble, and Kim mapped most of these over seven years ago. AI didn’t change the playbook. It raised the stakes. Couple these with sound architecture practices, the twelve-factor app, and a good delivery process (Shape Up, for example), and you are pointing in the right direction to realise true AI gains.

Fixing the Foundation

Even though AI will amplify your bottlenecks, there is plenty to be hopeful for, because it will also accelerate fixing your foundational issues. I have personally seen massive gains in automated test harness setup and development, chore reduction, reverse engineering business logic, CI/CD optimisation, and DevEx accelerants; all contribute to a better, healthier, more reliable foundation.

The prescription is straightforward:

Start with architecture. Can your teams deploy independently? Is your monolith constraining everyone? Can you spin up a new environment from code, or is it portal-configured infrastructure that nobody documented? If your architecture forces coordination across teams for every release, AI just helps you coordinate faster. That's not the win you think it is.
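If you want a rough, honest answer to the independence question, one way is to map which services are forced to release together today and look at the cluster sizes. The sketch below is a minimal illustration with hypothetical services and coupling edges, not a real dependency analysis; one giant cluster means every “microservice” still ships on the monolith’s schedule.

```python
# A rough litmus test for deploy independence: group services by
# "must release together" constraints and inspect the release trains.
# Service names and coupling edges are hypothetical.
from collections import defaultdict

coupled = [
    ("checkout", "payments"),   # cannot be released separately today
    ("payments", "ledger"),
    ("catalog", "search"),
]
services = {"checkout", "payments", "ledger", "catalog", "search", "emails"}

parent = {s: s for s in services}

def find(s):
    # Union-find with path compression.
    while parent[s] != s:
        parent[s] = parent[parent[s]]
        s = parent[s]
    return s

for a, b in coupled:
    parent[find(a)] = find(b)

clusters = defaultdict(set)
for s in services:
    clusters[find(s)].add(s)

for members in clusters.values():
    tag = "independent" if len(members) == 1 else "release train"
    print(f"{tag}: {sorted(members)}")
```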

Look at your safety nets. What's your test coverage on the core system? Not the greenfield apps. The thing that makes the money. If the answer involves a manual QA gate and a multi-hour end-to-end suite owned by a separate team, you have a testing strategy problem that AI will magnify.
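One practical safety net to build before AI touches the money-maker is a characterization test: pin the system’s current behaviour so refactors can’t silently change it. A minimal pytest sketch, where `billing.core.calculate_invoice` and the pinned figures are hypothetical:

```python
# Characterization tests: describe the revenue-critical path as it
# behaves *today*, before AI-assisted refactoring touches it.
# The module, function, and numbers below are hypothetical.
import pytest
from billing.core import calculate_invoice  # hypothetical module

@pytest.mark.parametrize(
    ("plan", "seats", "expected_total_cents"),
    [
        ("starter", 1, 900),          # pinned from observed production output
        ("starter", 10, 9000),
        ("enterprise", 250, 187500),  # includes the legacy volume discount
    ],
)
def test_invoice_matches_current_behaviour(plan, seats, expected_total_cents):
    # These assertions record the system as it is, not as we wish it were.
    # If a refactor changes a number, that's a conversation, not a surprise.
    assert calculate_invoice(plan, seats).total_cents == expected_total_cents
```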

Check your feedback loops. How quickly does a developer know their change works? Not "merged the PR" but "validated in production and the customer saw value." If that loop is measured in weeks, AI-generated speed at the start of the pipeline is meaningless.
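Measuring that loop doesn’t require a platform purchase to start. A sketch, using invented timestamps, of computing commit-to-production lead time:

```python
# Measure the loop that matters: first commit to "live in production",
# not commit to "PR merged". The deploy records below are invented.
from datetime import datetime
from statistics import median

deploys = [
    # (first commit on the change, change live in production)
    ("2026-01-05T09:12", "2026-01-19T16:40"),
    ("2026-01-07T14:03", "2026-01-21T11:15"),
    ("2026-01-12T10:30", "2026-01-13T09:05"),
]

lead_times_hours = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(start)).total_seconds() / 3600
    for start, done in deploys
]

print(f"median lead time: {median(lead_times_hours) / 24:.1f} days")
# If this prints weeks, faster code generation is optimising the wrong stage.
```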

Measure the system, not the parts. PR throughput going up while customer incidents also go up is not a success story. Harvey challenges every organisation to put their software delivery metrics side by side with the business metrics they actually care about. When they diverge, that's the signal to investigate.[^8]
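Harvey’s side-by-side challenge can start as something this simple: one delivery metric next to one business metric, per quarter. The figures below are invented to show the shape of the check:

```python
# Put a delivery metric next to the business metric it is supposed to
# serve and flag divergence. Quarterly figures are invented.
quarters = ["Q2", "Q3", "Q4"]
pr_throughput = [310, 420, 545]      # PRs merged per quarter
customer_incidents = [12, 19, 31]    # customer-facing incidents per quarter

for q, prs, incidents in zip(quarters, pr_throughput, customer_incidents):
    print(f"{q}: {prs} PRs merged, {incidents} customer incidents")

# Throughput up while incidents are also up is not a success story.
if pr_throughput[-1] > pr_throughput[0] and customer_incidents[-1] > customer_incidents[0]:
    print("Delivery and business metrics are diverging: investigate before scaling AI use.")
```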

The organisations winning with AI in 2026 aren’t the ones who adopted fastest. They’re the ones who diagnosed honestly, fixed their foundations, and then plugged in the amplifier. The band practised before they turned up the volume. Your engineering organisation has a sound. AI just makes it louder. Before you invest another dollar in tooling, figure out what that sound actually is.


[^1]: GetDX, AI tooling benchmarks: PR throughput and usage by tool (Q1 2026). Cursor daily users saw a 46% increase in PR throughput from Q4 2025 to Q1 2026.

[^2]: Google Cloud, Announcing the 2025 DORA Report. AI adoption linked to higher throughput but continued negative relationship with stability. Loosely coupled architectures with fast feedback see gains; tightly coupled systems see little or none.

[^3]: Swarmia, What the 2025 DORA report tells us about AI readiness. Analysis of DORA data showing over 50% deploy less than weekly, 15% need 1+ week to recover from failures.

[^4]: Scrum.org, DORA Report 2025 Summary. Summary of seven team archetypes and the observation that struggling teams need a reset, not AI.

[^5]: GetDX, DORA's 2025 research on the impact of AI. Podcast with Nathen Harvey and Laura Tacho discussing divergent outcomes: some orgs doubling incidents with AI, others halving them.

[^6]: GoTo Podcast, State of the Art of DORA Metrics & AI Integration. Harvey on bottleneck misalignment: AI speeds up code generation but overwhelms downstream testing and review.

[^7]: Rob Bowley, Findings from DX's 2025 report: AI won't save you from your engineering culture. Analysis showing organisational dysfunction costs more time than AI saves.

[^8]: GoTo Podcast, State of the Art of DORA Metrics & AI Integration. Harvey: put delivery metrics side by side with business metrics; divergence is cause for investigation.

Cover Image: Muse Live at Alcatraz, Milan (October 26, 2022) licensed under the Creative Commons Attribution-Share Alike 4.0 International license.