Why Your Organization Is Stuck
The Capability-Dissipation Gap: four structural forces that determine whether your organization compounds or drifts
Every enterprise leader I talk to can tell me what AI can do. The demos are impressive. The benchmarks improve quarterly. The vendor roadmaps promise more.
Almost none of them can tell me how fast their organization absorbs new capabilities.
That’s the number that matters. Not what the technology can do – what your organization can actually reorganize around. And for most enterprises, that number is close to zero. BCG’s latest research confirms the pattern: 71% of organizations now regularly use generative AI, yet more than 80% report no measurable impact on enterprise-level EBIT. Usage is everywhere. Value is not.
The AI capability frontier advances on 3–12 month cycles. Organizational absorption operates on annual-to-multi-year cycles. The distance between those two curves isn’t closing. It’s widening. Every model release pushes the frontier further ahead while your procurement process, your change management playbook, and your team’s working habits stay roughly where they were eighteen months ago.
This widening distance has a name – Nate B. Jones calls it the Capability-Dissipation Gap. And once you treat it as a diagnostic framework rather than an abstract observation, you can see that it is governed by four structural forces that have nothing to do with technology.
What’s Actually Holding You Back
When I talk to organizations that are “stuck on AI,” the conversation usually starts with technology. Wrong model. Wrong vendor. Wrong use case. But the diagnostic almost always lands somewhere else entirely.
The research literature identifies at least six distinct inertia forces – regulatory, structural, cognitive, path-dependent, routine, and cultural. In practice, I find they cluster into four diagnostic categories, with cognitive inertia operating as the engine inside cultural resistance, and path and routine inertia as the mechanisms inside organizational friction. Four categories, six forces. The nesting matters because the intervention differs depending on which sub-force is actually binding.
These four operate independently, but they also reinforce each other: regulatory constraints create cover for organizational slowness, which feeds cultural skepticism, which deepens trust deficits. And – critically – they look different from the generic change resistance that every enterprise has been managing for decades.
The Compliance Wall
Financial services firms that want to use AI for compliance work need approval from regulators who haven’t finished writing the rules. Healthcare organizations navigate HIPAA, FDA clearance, and institutional review boards. Government agencies run procurement cycles measured in years, not quarters.
This isn’t irrational. It’s structural. The COBOL systems running an estimated 95% of ATM transactions in the United States aren’t getting migrated because a startup published a blog post. The compliance frameworks governing pharmaceutical trials weren’t designed for AI-assisted analysis and won’t be rewritten on a vendor’s timeline.
What makes this different from every other technology adoption: regulatory cycles are annual. AI capability cycles are quarterly. With ERP or CRM, you could wait for the rules and still catch the technology. With AI, the technology laps the regulation – and every quarter of delay widens the gap between what’s possible and what’s permitted.
Regulatory inertia is the most defensible form of organizational slowness – and the most frequently used as cover for the other three.
The Bureaucracy Trap
This is the process friction layer: procurement cycles, IT governance, change management programs, internal politics. But it runs deeper than bureaucracy.
Two sub-forces make organizational inertia specifically lethal for AI adoption.
Path inertia – organizations channeling AI through existing transformation roadmaps designed for ERP rollouts and CRM migrations. The governance model, the rollout sequence, the success metrics – all inherited from a world where the technology stood still after deployment. AI doesn’t stand still. By the time your 18-month implementation plan reaches Phase 3, the capabilities you scoped in Phase 1 have been superseded twice.
Routine inertia – SOPs and workflows that get partially updated to include AI, producing pilot theatre. Only one-third of AI pilots ever reach production deployment. The AI tool is technically available. Nobody’s job description, incentive structure, or daily workflow has actually changed. Usage plateaus at “occasionally chatting with ChatGPT.” The pilot gets declared a success. Nothing compounds.
In Germany’s Mittelstand – the backbone of Europe’s largest economy – 94% of firms have not implemented AI at all, and the sector invests roughly 30% less in AI than the broader market. The gap between “Claude can technically do parts of this job” and “we have reorganized our workflows, retrained remaining staff, built quality assurance processes, and confidently integrated AI into production work” is enormous. This sequence regularly takes eighteen months – even in organizations that are actively trying to move fast. And unlike ERP, the tool you’re still implementing has been superseded by the time you finish.
The Mental Model Problem
“That’s not how we do things here” is the surface expression. Underneath it sits cognitive inertia – fixed mental models about what AI can and can’t do, what “real work” looks like, and who should decide.
“AI can’t do this” is the most common sentence in enterprise AI adoption. It’s sometimes correct. It’s more often a belief that hasn’t been tested since the person’s last interaction with a chatbot in 2024. The capability frontier has moved. Their mental model hasn’t. This is what makes cognitive inertia uniquely destructive in AI: the thing you’re wrong about changes every few months. With ERP, your skepticism could be calibrated once. With AI, last quarter’s accurate skepticism is this quarter’s outdated assumption.
Identity threat compounds the problem. When your expertise is “the person who knows how this process works,” AI doesn’t just offer efficiency – it threatens the source of your organizational value. The rational response isn’t adoption. It’s resistance dressed as quality concerns.
Even the most tech-native organizations aren’t immune. When Shopify CEO Tobi Lütke issued a company-wide memo in April 2025 making AI competency part of performance reviews – requiring employees to demonstrate why a task can’t be done with AI before requesting additional headcount – it confirmed what most organizations haven’t admitted: adoption doesn’t happen by encouragement. It has to be structural. If Shopify needs a mandate, imagine the adoption curve for a German Mittelstand manufacturer or a mid-market insurance broker.
Skill gaps sit here too. McKinsey’s 2025 workplace survey found that only 27% of white-collar workers frequently use AI in their daily work. Most enterprise workers’ AI fluency stops at conversational prompting. They can ask a chatbot a question. They cannot decompose a workflow into specifiable tasks, build evaluation criteria for AI output, or integrate AI into a production process. The distance between “I’ve used ChatGPT” and “I can specify what good AI output looks like for my domain” is the specification gap at the individual level.
In a moving-target environment, unchanged cadence and unchanged job design aren’t signs of stability. They’re early indicators of drift – a distinction I’ll come back to.
The Verification Bottleneck
Not generic distrust of technology. Specific, experience-based skepticism: “We tried this and it didn’t work here.”
The recurring bottleneck in enterprise AI deployment is not capability. It’s verification. The model can generate a contract analysis, a financial forecast, a compliance review. The question is always: who checks the output? How do we know it’s right? What’s our liability if it’s wrong?
These aren’t irrational concerns. They’re the concerns of organizations managing billions of dollars of risk. Anthropic’s research on agentic misalignment found that explicit safety instructions reduced but did not eliminate harmful model behavior under adversarial conditions – a finding that matters when organizations consider deploying AI agents that operate autonomously at scale. Building the institutional trust for that kind of deployment – with appropriate guardrails, audit trails, and human oversight – takes time that no benchmark improvement can compress.
Trust inertia is the hardest force to shortcut because it should only be resolved through evidence. But here’s what makes it uniquely compounding in AI: the evidence you need changes with every model release. The trust you built testing GPT-4 doesn’t transfer cleanly to Claude Opus 4.6. The verification process you designed for one capability level becomes inadequate when the next model reasons differently. Trust has to be rebuilt continuously – and most organizations aren’t building it the first time, let alone iterating.
They’re waiting for someone else to prove it works, which means they’re outsourcing their trust-building to competitors who are running the experiments right now.
Before You Dismiss This as Another “Move Faster” Argument
If you’re reading this and thinking “we have pilots, we have a strategy, we’re investing” – you’re probably not wrong. Most organizations can point to real AI activity. Budget allocated. Tools purchased. Training sessions scheduled. A handful of enthusiastic early adopters producing impressive demos.
That’s not the question. The question is whether that activity is closing the gap – or just making the organization feel like it’s moving while the frontier pulls further ahead.
Here’s what makes this hard to see from inside: your absolute progress can be real while your relative position deteriorates. You’re running pilots. Your competitors are redesigning workflows. You’re upskilling teams on prompt writing. They’re rebuilding job descriptions around AI-native processes. You’re both making progress. The distance between you is growing.
This isn’t about recklessness. Safety-critical domains, regulated industries, organizations with genuine liability exposure – these have legitimate reasons to move at a measured pace. Nobody should deploy AI in clinical decision-making because a competitor did it first.
But “measured pace” and “unchanged pace” are different things. If your decision cycles, your job architectures, and your evaluation criteria haven’t changed in eighteen months, that’s not caution. That’s the gap compounding against you. The organizations I talk to that are genuinely cautious can articulate exactly what conditions need to be met before they move. The ones that are drifting can only articulate why they haven’t moved yet. The distinction sounds subtle. It isn’t.
The Mess Is the Diagnosis
In theory, you diagnose which force dominates and address it. In practice, all four show up simultaneously in the same organization. I keep seeing it in firms that sell AI consulting services – der Zahnarzt hat die schlechtesten Zähne, as the Germans say. The dentist has the worst teeth. Same pattern everywhere. Business processes live in people’s heads. “AI can’t do this” is the default stance. Skills barely extend beyond conversational chatbots. And the organization treats its own AI transformation like another IT project on an 18-month roadmap.
No single inertia force dominates universally. What dominates is the weakest link in your chain from intent → usage → impact.
Roughly, by stage:
If your organization has no coherent AI strategy yet – cognitive inertia at the leadership level is the binding constraint. Nobody has articulated what AI is for in your specific context. The first move isn’t a pilot. It’s a strategy intervention: define 3–5 business outcomes tied to AI and a 12–24 month statement of intent.
If you’re running lots of pilots with little value – routine and structural inertia are binding. AI is bolted onto unchanged workflows. The first move is redesigning one end-to-end workflow where AI isn’t optional – it’s assumed in the process design, the metrics, and the incentive structure.
If you have some wins but can’t scale – cultural and governance inertia are binding. Visible pushback, exceptions, quiet blocking by middle managers who can stall any initiative without ever saying no. The first move is formalizing AI governance and psychological safety around experimentation, then codifying working patterns into organizational standards.
The diagnostic’s job isn’t to find one dominant force. It’s to surface which capability in the chain from vision → data → workflows → skills → governance is currently the binding constraint – and relieve that constraint instead of pushing generically on “adoption.” Relieving one constraint often surfaces the next. That’s not failure. That’s the actual sequence.
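If it helps to make the weakest-link logic concrete, here is a minimal sketch: score each capability in the absorption chain on whatever maturity scale you already use, and let the lowest score – not the average – name the binding constraint. The stage names and scores below are illustrative assumptions, not a prescribed rubric.

```python
# A minimal sketch of the weakest-link diagnostic, assuming a simple 0-5
# maturity score per stage. Stage names and scores are illustrative.
absorption_chain = {
    "vision": 3,      # leadership has articulated what AI is for in this context
    "data": 2,        # the data AI needs is accessible, clean, and governed
    "workflows": 1,   # at least one end-to-end workflow assumes AI by design
    "skills": 2,      # teams can specify tasks and evaluate AI output
    "governance": 2,  # approval paths, audit trails, and escalation rules exist
}

# The binding constraint is the lowest score, not the average -- an organization
# scoring 4 everywhere except workflows is still throttled by workflows.
binding_constraint = min(absorption_chain, key=absorption_chain.get)
print(f"Relieve '{binding_constraint}' first.")
```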
The Discipline That Separates Compounders from Drifters
Here’s the pattern among organizations that actually close the Capability-Dissipation Gap: a personal proof standard.
Not benchmarks. Not vendor demos. Not “we attended a workshop.” A structured discipline of testing AI against your actual work – your domain, your workflows, your quality standards – and updating that test with every model release.
Build a set of domain-specific tasks that represent the real work your team does. Run them against current models. Score the output against criteria you define. Document what works and what doesn’t. When the next model drops – and it will, within months – run the same tasks again. The delta tells you exactly what’s newly possible.
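To make the discipline concrete, here is a minimal sketch of what such a harness can look like. The model call is deliberately left abstract – the `run_model` callable stands in for whichever client your organization uses – and the task, criteria, and names are illustrative assumptions, not a prescribed rubric.

```python
# A minimal sketch of a personal proof standard: a fixed set of domain tasks,
# pass/fail criteria you define, and a re-run on every model release so the
# delta is explicit. `run_model`, the task, and the criteria are illustrative.
from dataclasses import dataclass, field
from datetime import date
from typing import Callable

@dataclass
class Task:
    name: str
    prompt: str                                 # the real work, phrased as you would delegate it
    criteria: dict[str, Callable[[str], bool]]  # checks that encode your quality bar

@dataclass
class EvalRun:
    model: str
    run_date: str
    scores: dict[str, float] = field(default_factory=dict)

def evaluate(tasks: list[Task], model: str,
             run_model: Callable[[str, str], str]) -> EvalRun:
    """Run every task against one model and score the output against your own criteria."""
    run = EvalRun(model=model, run_date=date.today().isoformat())
    for task in tasks:
        output = run_model(model, task.prompt)
        passed = sum(check(output) for check in task.criteria.values())
        run.scores[task.name] = passed / len(task.criteria)
    return run

def delta(previous: EvalRun, current: EvalRun) -> dict[str, float]:
    """What became newly possible -- or newly broken -- since the last release."""
    return {name: round(score - previous.scores.get(name, 0.0), 2)
            for name, score in current.scores.items()}

# Example task -- replace with work your team actually does, and criteria you
# would apply to a junior colleague's draft.
tasks = [
    Task(
        name="client_brief",
        prompt="Draft a one-page renewal brief for a mid-market insurance client.",
        criteria={
            "covers_renewal_terms": lambda out: "renewal" in out.lower(),
            "stays_under_500_words": lambda out: len(out.split()) <= 500,
        },
    ),
]
```

When the next release lands, the same tasks run through `evaluate` again, and `delta` is the number you bring to the absorption conversation.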
This isn’t just for technical leaders. If you’re in sales, test whether AI can generate a client brief that meets your quality bar. If you’re in marketing, test whether it can produce a campaign analysis you’d actually present. If you’re in operations, test whether it can draft a process specification your team would recognize as accurate.
The point: you can’t diagnose organizational drift if you haven’t tested the frontier yourself. Leaders whose AI skills stop at “chatting with ChatGPT” can’t evaluate what’s possible – which means they can’t specify what their organization should be absorbing. The specification gap starts at the top.
Every model improvement lands differently on someone who has been testing systematically versus someone who heard about the announcement on LinkedIn. The first person compounds. The second person drifts. And the gap between them widens with every release cycle.
Your First Move
The question isn’t whether your organization should adopt AI. That conversation is over. The question is which constraint in your absorption chain is currently binding – and whether you’re measuring it or just assuming you’re on track.
Start with the honest diagnostic. Where is your organization in the chain from intent to impact? Are you stuck at strategy (no clear AI intent), at integration (pilots but no workflow change), or at scale (wins but no organizational learning)?
Name the binding constraint. Then relieve it – not with another pilot, not with another tool purchase, but with the structural intervention that matches your specific inertia profile.
And if you’re not sure which force is binding – that’s the diagnosis too. You’re at stage one. Your first move is building the evaluation capability to tell the difference.
The Capability-Dissipation Gap widens every week you spend on the wrong intervention. Or no intervention at all. McKinsey’s data shows organizations that have built AI capabilities see labor productivity grow nearly 5x faster than the global average. The frontier doesn’t wait. Your absorption rate is the only variable you control.
So far in this series, we’ve covered what specification means, why it matters, and how to build it. This article opens a different question: why organizations can’t absorb what’s already available. Next up: the architecture decisions that lock you in before anyone notices.
If your organization is wrestling with which inertia force is actually binding, I’d like to hear what you’re seeing. The pattern is more consistent across industries than you’d expect.

