The Companies that Need AI Governance will be the Last to Use It

There is an irony sitting at the center of the AI governance conversation.

The organizations that face the highest stakes from ungoverned AI are also the organizations least likely to be running significant AI deployments right now. And when AI governance frameworks mature enough to be genuinely useful, these organizations will still be working through procurement cycles, compliance reviews, vendor assessments, and internal approval processes that move on timescales measured in years rather than quarters.

The companies developing governance frameworks are largely doing so for each other. The companies that actually need governance the most are in regulatory purgatory, watching from a distance, waiting for someone else to go first.

The Bureaucracy Problem

A hospital that wants to deploy an AI system for clinical decision support does not simply deploy it. It routes the request through legal review, compliance assessment, privacy impact analysis, security review, vendor risk evaluation, clinical governance committee, and potentially a regulatory filing. Each step has its own timeline. Each committee has its own calendar. Each review generates questions that prompt responses that, in turn, generate follow-up questions.

This is not dysfunction. This is an organization that has learned, through decades of litigation, regulation, and patient harm events, that moving fast in clinical environments has consequences that cannot be undone. The bureaucracy exists because the stakes are high and the history is real.

The governance frameworks being developed in the AI space were largely built by technology companies and deployed by others. Their development velocity and risk tolerance are not calibrated for environments where a wrong decision can result in patient death, regulatory sanctions, or institutional liability spanning decades.

By the time a healthcare organization has completed the internal processes required to evaluate an AI governance framework, the framework being evaluated has been superseded by a newer version. By the time they have approved a vendor, the vendor’s product roadmap has changed. By the time they have completed implementation, the regulatory landscape has shifted again.

The organizations most in need of AI governance are locked in a cycle where the governance conversation is always happening one generation behind the technology.

The Restriction Trend

Something else is happening simultaneously that further complicates the picture.

Organizations are beginning to restrict AI rather than govern it. Firewall blocks on consumer AI tools. Policies prohibiting the use of external language models with proprietary data. IT departments implementing egress filtering that prevents data from reaching AI APIs. Blanket prohibitions on agent deployment pending governance framework development.

This is a rational response to the governance gap. If you cannot govern AI effectively, restricting it is a defensible position. The regulatory and litigation exposure from ungoverned AI is real enough that the risk calculus favors restriction over exposure for many organizations.

The irony is that restriction is itself a form of governance, and in some ways a more honest one than the governance masquerade. An organization that says "we have blocked AI tools pending the development of appropriate governance frameworks" has a clearer accountability position than one that has deployed AI with policy documentation and an access-control layer and called it 'governed'.

But restriction does not scale as a long-term strategy. Competitive pressure from organizations that are deploying AI will eventually make restriction untenable. The question is whether governance frameworks will be mature enough to support safe deployment by the time that pressure forces a change.

For highly regulated industries, the answer may be no. The frameworks will still be developing. The standards will still be debated. And the organizations will face a choice between restrictions that are no longer viable and deployment without adequate governance.

Regulatory Purgatory as Advantage

Here is the uncomfortable insight that nobody in the governance space wants to say out loud.

The organizations stuck in regulatory purgatory may be doing it right.

The governance frameworks being rapidly deployed in technology-forward organizations are being put in place before the standards are settled, before the regulatory requirements are clear, before the case law on liability has been established, and before the behavioral science of how humans actually interact with AI governance systems over time has produced actionable findings.

Healthcare organizations that have been unable to move fast are not deploying AI governance theater. They are not implementing governance masquerades that satisfy a board conversation but would not survive regulatory examination. They are not accumulating governance debt that will require expensive retrofitting when the real requirements arrive.

They are waiting. And the waiting, however frustrating, means they will implement governance after the standards have matured, after the regulatory requirements have crystallized, after the organizations that moved fast have generated the incident record and the litigation that teaches the field what adequate governance actually requires.

The first movers in AI governance are doing the expensive work of figuring out what governance means through a process of trial and error and correction. The laggards, for once, may benefit from letting someone else pay that tuition.

What This Means for AI Governance as a Field

The field of AI governance is being defined primarily by the organizations and vendors most motivated to shape it: technology companies with existing AI deployments, governance platform vendors seeking market definition, and regulatory bodies responding to incidents that have already occurred.

The organizations most in need of governance are largely absent from that definitional conversation. When they do engage, it is often through comment periods on proposed regulations or through industry association working groups that move slowly enough to be structurally irrelevant to the deployment decisions being made in the technology sector.

The result is a governance framework that may be well-suited to the organizations that developed it and poorly suited to the organizations that need it most. A governance framework designed for a technology company deploying at scale, with a dedicated compliance team and a high risk tolerance, is not the same as the one a hospital needs when deploying an AI system that influences clinical decisions.

We have not fully defined what AI governance is. We have not agreed on what it means. We have not established who needs what level. And the pace of technological evolution means that by the time those questions are settled, the technology will have moved again. The governance frameworks will be chasing a target that keeps moving.

The Slower Path May Be the Better One

The organizations left out of the AI governance conversation may be well-positioned to implement governance effectively when they eventually get there.

Not because they are smarter or more capable. Because they will have the benefit of hindsight that the first movers are generating right now through expensive failures. Because the regulatory requirements will be clearer by the time they implement. Because the standards will be more settled. Because the case law on liability will have established what adequate governance looks like in practice. Because the governance theater will have been exposed and the organizations doing it will have paid the price that makes the lesson legible.

The companies that need AI governance most will be the last to implement it. That is a genuine problem for patient safety, financial system integrity, and critical infrastructure resilience in the near term.

But when they implement, they may implement it better. The bureaucracy that is currently a liability in keeping pace with the technology may turn out to be an asset in avoiding the governance debt that faster-moving organizations are accumulating.

The field just does not know that yet. And neither do they.


Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, is available now!