Research from MIT Sloan Management Review and Boston Consulting Group found that 70% of companies reported minimal or no impact from their AI initiatives. The primary cause of failure is not technical — it is leadership. Executives who treat AI as an IT project rather than an organizational transformation consistently produce expensive disappointments. The gap between AI's technical capability and organizational readiness is a leadership gap, and closing it requires the same identity-level work that all genuine leadership transformation demands.

I started working at the intersection of AI and leadership before most executives had heard of ChatGPT. Not because I predicted the current moment — I did not — but because I saw early that the leaders who struggled most with technological change were not struggling with the technology. They were struggling with themselves.

Every major technology shift in my 16 years of coaching has followed the same pattern: the technology arrives, executives either panic or dismiss it, organizations launch initiatives that are overscoped and underled, most of those initiatives fail, and the few that succeed are led by people who understood that the real work was human, not technical.

AI is following this pattern with extraordinary precision. And the failures are getting expensive.

What the Research Shows

The MIT Sloan Management Review and BCG joint research program, which has surveyed thousands of organizations over multiple years, consistently finds that roughly 70% of companies report minimal or no value from their AI investments. Their 2023 report identified a specific pattern: organizations that achieved significant value from AI were 3x more likely to have senior leadership that was directly engaged in AI strategy, not just approving budgets.

A 2023 McKinsey Global Survey found that while 55% of organizations had adopted AI in at least one business function, only 27% reported that AI had produced significant bottom-line impact. The survey specifically identified "lack of clear strategy" and "leadership buy-in without leadership understanding" as top barriers.

Gartner's 2024 analysis estimated that through 2025, 85% of AI projects would deliver erroneous outcomes due to bias in data, flawed algorithms, or the teams responsible for managing them — a failure rate that has not improved significantly from their earlier projections. Their research pinpointed the gap: organizations were investing in AI technology while underinvesting in the organizational and human changes required to use it effectively.

The Three Leadership Failures

After working with dozens of executives through AI adoption challenges, I see three consistent patterns of failure.

Failure 1: Treating AI as an IT project. This is the most common and most deadly mistake. When a CEO delegates AI strategy entirely to the CTO or CIO, they are signaling that AI is a technology problem. It is not. AI is a business transformation problem that requires technology. The distinction matters enormously. A technology problem gets solved in the IT department. A business transformation requires every function — operations, HR, finance, legal, marketing — to change how they work. That scope of change can only be driven from the top.

Failure 2: Confusing AI enthusiasm with AI literacy. I work with executives who can deliver a compelling keynote about AI's potential but cannot explain how a large language model actually works. They know the vocabulary but not the substance. This creates a dangerous gap: they make strategic decisions about AI deployment without understanding the technology's actual capabilities, limitations, and failure modes. A 2023 Stanford HAI survey found that only 22% of executives felt confident in their ability to evaluate AI claims made by vendors. That means 78% are making million-dollar decisions based on marketing materials.

Failure 3: Ignoring the human architecture. AI does not just change what work gets done. It changes who does it, how they do it, and what skills matter. Every AI deployment is a workforce transformation, whether the organization acknowledges it or not. Leaders who launch AI initiatives without a plan for reskilling, role redesign, and cultural adaptation are building on sand. Harvard Business School professor Marco Iansiti's research consistently shows that the organizations that succeed with AI invest at least as much in organizational change as in the technology itself.

What the Ikigai Aperture Reveals About AI Adoption

When I work with executives struggling with AI strategy, I use the Ikigai Aperture framework to map what is actually happening beneath the surface.

At the Absolute Identity level: many senior leaders built their careers and their identities on deep domain expertise. AI threatens that foundation directly. If a machine can produce, in seconds, the kind of analysis you spent decades learning to do, what does that do to your sense of professional worth? This is not a rational concern — it is an identity concern. And until it is addressed, the leader will unconsciously sabotage AI adoption while appearing to support it.

At the Contextual Identity level: the leader who feels competent and confident in traditional strategic planning may feel incompetent and exposed when the conversation shifts to AI. Their contextual identity shifts from "expert" to "novice," and that shift triggers defensive behaviors — over-delegation to "the tech people," performative enthusiasm without substance, or excessive caution disguised as prudence.

At the Lens level: the leader's perceptual filter determines whether they see AI as an opportunity or a threat, whether they see their team's AI anxiety as a problem or a signal, and whether they interpret AI vendor claims accurately or through a fog of wishful thinking or fear.

The executives who succeed with AI are not the ones with the best technology strategy. They are the ones who have done the identity work required to lead through uncertainty without defaulting to either blind enthusiasm or disguised resistance. That work is the same work all genuine leadership transformation requires.

— Dr. Dhru Beeharilal

What Successful AI Leaders Do Differently

The leaders I have seen succeed with AI share five specific behaviors that separate them from the majority:

They learn the technology personally. Not at expert level. But deeply enough to ask informed questions, evaluate vendor claims, and understand the difference between what AI can actually do and what marketing says it can do. They invest their own time — hours per week, not hours per quarter — in understanding the tools their organizations are deploying.

They start small and measure rigorously. Instead of launching organization-wide AI transformations, they pick one process, one team, one use case and run a genuine experiment. They define success metrics before deployment, not after. They publish the results — including failures — transparently.

They lead the change, not just sponsor it. They show up in the workshops. They use the tools themselves. They share what they are learning, including their confusion and mistakes. This vulnerability signals to the organization that learning is expected, not just performance.

They invest in people at least as much as in technology. For every dollar spent on AI tools, they spend at least a dollar on training, role redesign, and change management. They understand that the technology is the easy part — the human architecture is where the work actually is.

They stay honest about what they do not know. This is the hardest one. The executives who fail at AI adoption are often the ones who feel they need to project confidence about something they do not understand. The ones who succeed admit what they do not know, surround themselves with people who do, and stay curious rather than performative.

The Leadership Gap Is the Only Gap That Matters

The technology is not the bottleneck. AI capabilities are advancing faster than any organization can absorb them. The gap is in leadership — specifically, in leaders' ability to tolerate uncertainty, to learn publicly, to rethink their own identities in light of technological change, and to drive organizational transformation that touches every function, every role, and every assumption about how work gets done.

That gap cannot be closed by hiring a Chief AI Officer, though that can help. It cannot be closed by attending a conference, though exposure matters. It can only be closed by doing the kind of deep, identity-level work that the best executive coaching provides — the willingness to see yourself clearly, to admit what you do not know, and to lead from learning rather than from certainty.

AI will not replace leaders. But leaders who cannot lead through AI will be replaced by leaders who can.

Key Takeaways