AI Agents Are Not Going to Fix the Fact That Most Companies Can't Execute Basic Strategy
March 18, 2026
I want to be honest about something before we get into this. When I first heard companies talking about AI agents as the solution to their operational problems, my first thought wasn't skepticism. My first thought was: these are the same companies that couldn't get their teams to read a one-page strategy document. And I mean that literally.
We have a decades-old, thoroughly documented crisis in business strategy execution. The software industry has been circling this problem for years, throwing new tools at it, and it has not gotten better. The numbers are staggering and they have been staggering for a long time. Research from Kaplan and Norton, the people behind the Balanced Scorecard, puts the strategy execution failure rate at up to 90%. Harvard Business Review puts it at 67%. Pick your study - the range is somewhere between 60% and 90% of companies failing to turn their strategic plans into actual results. Business intelligence tools haven't fixed it. CRM platforms haven't fixed it. Project management software hasn't fixed it. And AI agents are not going to fix it either.
That is my position.
What's Actually Happening With AI Agents Right Now
Here's where we are in early 2026. The AI agent market is enormous - projections put it somewhere between $47 billion and $100 billion by 2030, depending on which analyst you read. In a PwC survey of 300 senior executives from May 2025, 88% said they planned to increase AI-related budgets in the next 12 months specifically because of agentic AI. Seventy-nine percent said AI agents were already being adopted at their companies.
That sounds like a success story. Except it isn't. Not really.
Gartner came out in June 2025 with a prediction that should have stopped more people in their tracks: over 40% of agentic AI projects will be canceled by the end of 2027, driven by escalating costs, unclear business value, and inadequate risk controls. Gartner's own analyst put it plainly - "most agentic AI propositions lack significant value or return on investment, as current models don't have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time."
S&P Global research found that 42% of companies abandoned most of their AI initiatives in 2024, up dramatically from 17% the year before. The average organization scrapped 46% of its AI proofs of concept before they ever reached production. A Carnegie Mellon benchmark called TheAgentCompany - which gave AI agents realistic office tasks like browsing internal websites and coordinating with simulated coworkers - found that the best agent they tested successfully completed only about 24% of tasks autonomously.
And Gartner quietly estimates that of the thousands of vendors marketing themselves as agentic AI providers, only about 130 are real. The rest are what Gartner calls "agent washing" - rebranding existing chatbots, RPA tools, and basic automation as AI agents without the substance to back it up.
So we have companies buying into a $47 billion market where most projects fail, most vendors are exaggerating their capabilities, and the technology itself - even at its best - completes barely a quarter of realistic work tasks autonomously. And companies are responding to this by increasing their budgets.
Gerald asked me once, while we were standing in the Costco parking lot waiting for the shuttle, why people keep buying things they've heard are unreliable. I told him it's because the alternative is admitting the problem is them. He thought about that for a while and then said he wanted the rotisserie chicken. I thought about it the rest of the drive home.
The Execution Problem Is Not a Technology Problem
Here is what the data on strategy execution actually shows, and this is where I want to stay for a minute because I think people skip past it too quickly when they're excited about new technology.
McKinsey found that 45% of nearly 800 executives reported that their strategic planning processes failed to track the execution of strategic initiatives at all. Not that they tracked it badly. That they didn't track it. At all. Harvard Business Review research found that fewer than 33% of senior executives' direct reports clearly understand the connections between corporate priorities. Another study found that 60% of leaders think less than 20% of the workforce has even a basic understanding of company strategy and can explain it.
Sixty percent of leaders believe that four out of five of their employees cannot explain what the company is trying to do. That is the environment into which people are deploying AI agents.
An AI agent cannot fix the fact that your middle managers haven't bought in. It cannot fix the fact that your departments are coordinating poorly - which 30% of companies cite as their single greatest challenge in execution. It cannot fix the fact that 61% of organizations, according to the Economist, already acknowledge they struggle to bridge the gap between strategy formulation and day-to-day implementation. These are human problems. They are leadership problems. They are culture problems.
What an AI agent can do is automate tasks. Real, narrow, specific tasks - updating records, routing tickets, generating first drafts of emails, processing documents. That has genuine value in the right context. I'm not dismissing it. But automating tasks inside a broken execution environment doesn't fix the environment. It just means the broken work gets done faster.
PwC's own survey, which was bullish on AI agents overall, acknowledged that broad adoption doesn't always mean deep impact - that many employees are using agentic features to speed up routine tasks, which is "a meaningful boost in productivity, but it stops short of transformation." Even the optimistic survey had to admit this. Reports of full adoption often reflect excitement about what agentic capabilities could enable, not evidence of widespread transformation.
The Pilot Theater Problem
There's a specific dynamic playing out right now that I think deserves its own name. Let's call it pilot theater.
Companies stand up an AI agent pilot. It's impressive in a demo. Someone gets a promotion for running it. The pilot gets shown to the board. Nobody asks hard questions about production readiness, integration with legacy systems, or what happens when the agent encounters an exception it wasn't designed for. Then the pilot quietly gets shelved six months later when real-world requirements show up: security reviews, compliance checks, integration with systems that don't have clean APIs, exception-heavy workflows that the agent can't navigate.
Kore.ai described this pattern clearly: many enterprises "poured money into agent pilots to keep pace with the narrative: 'Yes, we're doing AI.'" The experiments are quick to start, impressive to watch, and easy to showcase. But they often fall apart when real-world requirements show up.
Chris walked into the office last month holding a printout of some AI vendor's case study claiming 35% productivity gains and 20-30% cost reductions. He seemed genuinely excited. It's hard to argue with Chris when he's excited about something - he's very earnest about it - but I read the methodology section later and the sample was three companies, two of which were Oracle and Microsoft. I did not say anything. I just filed it.
The McKinsey State of AI report from late 2025 found that meaningful enterprise-wide bottom-line impact from AI use "continues to be rare." The companies achieving significant value - their definition was EBIT impact of 5% or more - represented about 6% of respondents. Six percent. And what distinguished those companies wasn't that they had better AI. It was that they had fundamentally redesigned workflows and treated AI as a catalyst for organizational transformation. In other words, the companies getting real value from AI agents already had strategy execution figured out before the agents arrived.
What Vendors Are Selling vs. What Companies Need
The vendor landscape right now is genuinely chaotic in ways that make evaluating any of these tools difficult. Salesforce rebranded most of its core platform around Agentforce 360 at Dreamforce 2025. Workday launched Illuminate AI agents for HR and finance. ServiceNow launched an AI Agent Orchestrator alongside thousands of pre-configured agents. Microsoft rolled out Agent 365 at Ignite 2025. Adobe reported that 99% of the Fortune 100 had used AI capabilities within Adobe applications.
Every major enterprise software company is now an AI agent company. Every one of them.
I've written before about what happens to vendor relationships when the underlying business case gets shaky. The renewal conversations get awkward. The account managers get creative with how they frame ROI. You start seeing a lot of "the value is in the platform" language that doesn't map to any specific outcome your team can point to.
That's where a lot of companies are headed with AI agents. They've signed the deals. The agents are technically deployed. And now someone needs to explain in a quarterly review why the strategy execution metrics haven't improved.
Because they won't have improved. Not if the underlying problem is that the workforce doesn't understand the strategy, or that leadership isn't tracking execution, or that departments aren't communicating with one another. Mismanaged strategy execution already costs companies up to 10% of their annual revenue. An AI agent that automates a broken process recovers none of that.
Tory had a whole theory about this last week - something about how the right tool changes everything - and he said it with real conviction, the way he always does. I didn't push back. He's been through a lot lately. But I thought about all the CRMs and project management platforms and business intelligence dashboards that came before this conversation, and how the strategy execution numbers haven't moved in decades despite all of them. The data hasn't changed. The tools keep getting replaced.
The Uncomfortable Truth About the 6%
The companies actually getting enterprise-wide value from AI - that 6% in McKinsey's research - share one characteristic: they started with a functioning execution environment. They had clear ownership of strategic priorities. They had measurement systems. They had leadership that was actually invested in implementation. And then they added AI on top of that foundation.
That's not a technology story. That's a discipline story. And discipline is not something you can purchase through a vendor.
IBM researcher Marina Danilevsky said something that stuck with me: "It's quite a statement to make when we haven't even yet figured out ROI on LLM technology more generally." She was talking about the claim that 2025 was the year of the AI agent. She's right. We were rushing to declare a winner in a race that hadn't finished, and most of the runners hadn't made it past the starting line.
I made Gerald's chicken and rice casserole the other night - the one he's had since before we were married - and I was thinking about how that recipe only works because every step is executed correctly, in order. You cannot skip the step where you sear the chicken first. You cannot rush the resting time. The ingredients are not complicated. The discipline of the process is what makes it work. I've brought the same casserole to the office three times and everyone always asks what's in it, like the answer is going to be something surprising.
Strategy execution is the same. The ingredients are not mysterious - clear ownership, measurable milestones, leadership that stays engaged through implementation rather than just planning, communication that actually reaches frontline employees, review cadences that are actually kept. None of this requires AI. All of it requires discipline that most companies demonstrably do not have, based on thirty years of consistent data.
I'm Not Anti-Agent. I'm Anti-Delusion.
I want to be precise here, because I'm not arguing that AI agents are useless. For specific, well-defined, high-volume repetitive tasks - document processing, customer support routing, data enrichment, certain kinds of research aggregation - they can be genuinely useful. In lead generation workflows, for instance, where the task is narrow and measurable, there's real value available if you approach it without the transformation narrative attached.
What I'm arguing against is the specific claim that AI agents are going to solve the strategy execution problem. They are not. The strategy execution problem is that people don't know what the strategy is, leaders don't track whether it's happening, and organizations don't hold anyone accountable when it isn't. An AI agent that can autonomously complete 24% of office tasks - in ideal benchmark conditions - is not the solution to any of those problems.
The companies buying AI agents right now without first diagnosing why their strategy isn't executing are going to join the 42% that already abandoned most of their AI initiatives in 2024. They're going to join the organizations that scrapped, on average, 46% of their proofs of concept before production. And when those projects get quietly canceled, someone will write a memo about change management and insufficient user training, and nobody will mention the part where the company still doesn't have a strategy its employees can explain.
That's not a technology failure. That was always going to be the outcome.
The tools are not the problem. They never were. Stephanie once suggested we buy a new project management platform because the current one wasn't capturing our strategic goals clearly enough. She'd already expensed four others. I gently pointed out that the issue was that we hadn't agreed on what the strategic goals were. She looked at me like I'd said something very strange. She's not wrong that better tools can help. She's just wrong about the order of operations.
What I keep watching is companies deploying agents into the same environment that broke the last six tools, then acting surprised when the agents don't fix it either. They've adopted them. The strategy problem is still there, exactly where they left it.