What the 2025 DORA report tells us about AI readiness

Rebecca Murphey, Field CTO · Oct 22, 2025

The 2025 DORA report is out, and the results should worry any engineering leader trying to adopt AI without first establishing an effective engineering organization.

More than 50% of respondents to the annual DORA survey said they deploy less than once a week. When deployments fail, 15% of teams need more than a week to recover. Nearly half of respondents say their teams are operating ineffectively on at least one axis — missing automated tests, still relying on costly manual QA, still doing tedious deploys and rollbacks, still working from tickets rather than taking real ownership.

Now, many of these same organizations are looking to AI to solve for years of neglect around core capabilities like CI/CD, automated testing, reliable development environments, secure data access, autonomous teams, and customer centricity.

Alas: As optimistic as I am about the eventual impact of AI, it is at the end of the day a predictive text generator, and quite poor at solving organizational problems.

When your deployment pipeline is held together with bash scripts and That One Engineer, when your test suite is flaky or entirely absent, when nobody knows what’s actually running in production, when production incidents are multi-day affairs — in that world, generating more code faster just exacerbates bottlenecks elsewhere.

The DORA report warns that these bottlenecks will “neutralize any gains from AI”, which I think might be generous. Every line of AI-generated code still needs to go through your review process, your test suite, and your deployment pipeline. If those systems are missing or struggling, pushing more code isn’t going to lead to better outcomes.

What effective AI adoption actually requires

At Swarmia, every engineering organization we work with is trying to figure this out. The ones that are succeeding right now have certain things in common.

First, the cultural prerequisites:

  • Flexibility in procurement and financing. The AI landscape changes monthly. If you’re signing multi-year contracts for AI tools right now, you’re setting yourself up for buyer’s remorse. You need the ability to experiment, fail fast, and pivot without triggering procurement emergencies. Now is not the time to be questioning a $20 AI subscription.
  • Security, legal, compliance, and procurement teams that actually engage. Not the kind that reflexively says no to everything or subjects every decision to a multi-month process, but teams that understand what’s happening and help build guardrails.
  • A culture of experimentation and adaptation. Today’s essential AI tool might be deprecated next quarter. Build with that assumption.
  • Clear ownership models. “AI generated that bug” doesn’t fly. If it’s in your codebase, you own it.

And importantly, the technical infrastructure:

  • Automated testing and real CI/CD. Not the kind where you claim to have CI/CD because Jenkins runs sometimes, but actual, reliable pipelines that give developers confidence. This work isn’t glamorous, but it’s both impactful and essential.
  • An internal platform that’s treated as a real product. This isn’t something that gets thrown together by a few engineers in their spare time. It needs dedicated ownership, clear product thinking, and ongoing investment.
  • A data ecosystem that works. If your teams are still emailing spreadsheets around or begging data teams for reports, you’re not ready for AI at scale. Effective AI programs need streamlined access to the data they need, with proper governance and discoverability built in.
  • Safe mechanisms for AI to access internal data. AI tools need context to be useful, but that context often includes sensitive information. You need infrastructure that lets AI access what it needs while maintaining security boundaries, audit trails, and user privacy (there's a rough sketch of what that can look like right after this list).
  • The ability to roll back quickly when things go wrong. And they will go wrong. AI-generated code can introduce issues that aren’t immediately obvious. You need the capability to identify problems and revert changes quickly, ideally within minutes rather than days.
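To make the data-access point less abstract, here's a minimal sketch of what that kind of boundary can look like. Everything in it is hypothetical: the fetch_for_ai function, the support-copilot tool name, and the allowlist are invented for illustration, and in a real organization this logic would live in your data platform or an API gateway, backed by your actual governance tooling, rather than in a single Python file.

```python
import json
import time

# Hypothetical allowlist: the fields each AI integration is allowed to read.
# In practice this would be backed by your IAM or data-governance tooling.
FIELD_ALLOWLIST = {
    "support-copilot": {"ticket_id", "subject", "body", "product_area"},
}

AUDIT_LOG = "ai_data_access.log"


def fetch_for_ai(tool_name: str, record: dict, requested_fields: set) -> dict:
    """Return only the fields this tool may see, and leave an audit trail."""
    allowed = FIELD_ALLOWLIST.get(tool_name, set())
    granted = requested_fields & allowed
    denied = requested_fields - allowed

    # Redact anything outside the allowlist instead of failing the whole request.
    payload = {field: record[field] for field in granted if field in record}

    # Append-only audit trail: who asked, what was returned, what was withheld.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "tool": tool_name,
            "granted": sorted(granted),
            "denied": sorted(denied),
        }) + "\n")

    return payload
```

Calling fetch_for_ai("support-copilot", ticket, {"subject", "body", "customer_email"}) would return the subject and body, silently withhold the email address, and record both decisions. That kind of trail is exactly what security and compliance teams need before they can say yes.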

These capabilities take time and investment to build. They’re not exciting, and they won’t make for great demo videos. But they’re what separates organizations that successfully integrate AI from those that just add more chaos to an already chaotic system.

AI creates new ways to break things

AI coding tools also introduce three main categories of risk that traditional development practices weren’t designed to handle:

  • Shadow IT becomes shadow code generation. Folks are increasingly using unapproved AI tools to write production code, answer customer questions, and build tools that access sensitive data. This creates immediate security and compliance risks, as AI-generated code may inadvertently expose sensitive information or violate your data handling policies. Senior leaders compound the problem by creating AI prototypes and expecting developers to ship them to customers without proper security review or testing, bypassing the very guardrails designed to prevent these issues.
  • Engineering-adjacent roles are now writing code. Product managers, designers, and business analysts are using AI to accomplish technical tasks that were previously beyond their capabilities. While this democratization of coding can accelerate certain workflows, it creates significant risks when these roles lack the context to understand security implications, architectural constraints, or technical debt.
  • Architecture drift accelerates. AI doesn’t understand (at least not well) your system’s architectural boundaries, design principles, or domain constraints. It will confidently generate code that appears to work in isolation but systematically violates the patterns your team spent years establishing. The problem compounds because AI-generated code often passes code review — it’s more or less syntactically correct, has basic tests, and solves the immediate problem. By the time you notice the architectural rot, it’s embedded throughout your codebase.
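None of these risks have a silver-bullet fix, but architecture drift can at least be surfaced mechanically. As one illustration (not a prescription), here's a hypothetical boundary check that could run in CI; the package names and the allowed-dependency map are invented for the example, and real projects might reach for an existing tool such as import-linter instead of rolling their own.

```python
import ast
import pathlib
import sys

# Hypothetical domain boundaries: which top-level packages each package may import.
ALLOWED_DEPENDENCIES = {
    "billing": {"billing", "shared"},
    "accounts": {"accounts", "shared"},
    "shared": {"shared"},
}


def find_violations(src_root: str = "src") -> list:
    """Flag imports that cross the domain boundaries the team has agreed on."""
    violations = []
    for path in pathlib.Path(src_root).rglob("*.py"):
        package = path.relative_to(src_root).parts[0]
        allowed = ALLOWED_DEPENDENCIES.get(package)
        if allowed is None:
            continue  # not a package we enforce boundaries for
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            targets = []
            if isinstance(node, ast.ImportFrom) and node.module:
                targets.append(node.module.split(".")[0])
            elif isinstance(node, ast.Import):
                targets.extend(alias.name.split(".")[0] for alias in node.names)
            for target in targets:
                if target in ALLOWED_DEPENDENCIES and target not in allowed:
                    violations.append(f"{path}: {package} must not import from {target}")
    return violations


if __name__ == "__main__":
    problems = find_violations()
    for line in problems:
        print(line)
    sys.exit(1 if problems else 0)  # fail the CI job when a boundary is crossed
```

A check like this won't encode every design principle, but it turns "billing never talks to accounts directly" into something a pull request can fail on, whether the code was written by a person or a model.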

Same metrics, different bottlenecks

Many organizations track vanity metrics like “lines of code generated by AI” or “AI suggestion acceptance rate.” These numbers tell you something (how much of your code is AI-generated) but nothing about whether you’re building the right thing or building it well.

The metrics that are useful haven’t changed. DORA metrics like deployment frequency, change failure rate, and time to restore service tell you whether you’re delivering software faster and more reliably.
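For what it's worth, these metrics fall out of data most teams already have. Here's a rough sketch, assuming you can pull deployment records with a finish time, a success flag, and a time-to-restore for the failures; the Deployment shape and the dora_snapshot function are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Deployment:
    finished_at: datetime
    succeeded: bool
    restored_after: timedelta | None = None  # only set for failed deployments


def dora_snapshot(deployments: list, window_days: int = 28) -> dict:
    """Deployment frequency, change failure rate, and time to restore over a window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [d for d in deployments if d.finished_at >= cutoff]
    failures = [d for d in recent if not d.succeeded]
    restores = sorted(d.restored_after for d in failures if d.restored_after is not None)

    return {
        "deploys_per_week": len(recent) / (window_days / 7),
        "change_failure_rate": len(failures) / len(recent) if recent else 0.0,
        # rough median: good enough for a trend line, not a benchmark
        "time_to_restore": restores[len(restores) // 2] if restores else None,
    }
```

If that snapshot shows less than one deploy a week or a time to restore measured in days, those numbers, not the volume of AI-generated code, are the ones to move first.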

But AI is shifting where time gets spent in your development process: developers spend less time writing initial code and more time reviewing and validating it. This means it's perhaps even more important to keep an eye on cycle time, code review patterns, rework rates, and code quality indicators.

Because if your AI-generated code creates more bugs, longer review cycles, or increased technical debt, you haven't gained much, if anything.

AI’s biggest impact might not be in engineering

While much of the conversation around AI in tech companies focuses on code generation and developer productivity, some of the most transformative effects are happening on non-engineering teams — and most organizations aren't ready for it.

A senior engineering leader at a company with roughly 1,200 engineers recently shared with me that their biggest AI wins weren’t coming from the engineering organization at all. Their Trust and Safety team was using AI to better prioritize cases. Support teams were resolving tickets faster. Operations teams were automating workflows that had been manual for years. All without much engineering support.

These teams have always had backlogs of tooling requests that never made it to the top of the priority list. Now, with AI assistance, they're building their own solutions — or at least, they're trying to. For teams where even a buggy internal tool would be transformative, AI can unlock capabilities that were previously out of reach.

If you’re busy measuring AI impact on engineering productivity, you might not be investing appropriately in engineering-adjacent use cases that deliver more value. Marketing might be building lead generation tools that would have taken months to get prioritized. Finance and operations might be eliminating entire categories of manual work. Support teams might be resolving tickets in minutes instead of hours.

AI is going to empower non-engineering teams whether you plan for it or not. Without secure, governed platforms, they’ll build their own solutions — and unlike engineering teams, they may not understand the security implications, data governance requirements, or technical debt they’re creating. The question is whether you’ll enable this transformation safely, or scramble to contain it after the fact.

Why smaller companies have an edge (but might not keep it)

Everything outlined above is significantly easier for smaller, earlier-stage companies. When you have five or ten engineers instead of 500, you don’t need to “turn the ship around” — you can point it in the right direction from the start.

Smaller organizations have inherent advantages in the AI era. They can make procurement decisions without navigating layers of approval. They can pivot quickly when something isn’t working. They don’t have legacy systems held together with shell scripts, or deployment processes that require three days of coordination, or test suites that take hours to run.

But sustaining this agility as they grow? History suggests it’s difficult. The real test will come in five years, when today’s AI-native startups may have 500 engineers instead of 50.

Meanwhile, the companies struggling most with AI adoption spent the last decade ignoring fundamentals that would have benefited human developers all along. They skipped investing in automated testing, tolerated slow manual deployments, and let technical debt accumulate because shipping features always felt more urgent.

Now they’re adding AI on top of systems that were already frustrating for human developers. Every shortcut they took is now a blocker to effective AI adoption. The irony is that the practices needed for AI would have made them better engineering organizations years ago, if they had chosen to invest.

Build for adaptability, not just AI

AI isn’t going to fix broken processes. Indeed, broken processes are going to break your AI initiatives, and fixing them will require more than the latest LLM.

Be proactive about solving issues of quality and stability, and other problems that are hard to AI away. And importantly, be conscious of the tradeoffs among the three pillars of engineering effectiveness: business outcomes, developer productivity, and developer experience. If you do these things, the benefits of AI will likely follow naturally.

But in the end, this isn’t really about AI. It’s about building an engineering organization that can adapt to whatever comes next. AI is just the current test of that capability. And based on the DORA numbers, a lot of us are failing that test.

Good news: you don’t have to figure this out alone
Join us Oct 28 for a session on AI adoption in engineering with DORA and Honeycomb — what the 2025 research reveals, and how engineering leaders are making AI work in practice.
Register now
Rebecca Murphey
Rebecca Murphey helps Swarmia customers navigate people, process, and technology challenges in pursuit of building an effective engineering organization.
