When Does AI Governance Actually Happen?
Ask ten people what AI governance means, and you will get ten different answers. That is not unusual for an emerging field. What is unusual is that the answers don’t just differ in emphasis or priority. They describe fundamentally different activities, happening at fundamentally different times, aimed at fundamentally different problems.
It’s no different from trying to define what an AI agent is, a question already muddied by the current practice of agent washing.
This isn’t a vocabulary dispute. It’s a signal that the field hasn’t yet agreed on what it’s actually trying to do.
After asking this question of practitioners, researchers, and executives across the industry, I saw one pattern emerge. People don’t just disagree on the definition of AI governance. They disagree on when it happens. And some of them argue that governing AI itself is the wrong frame entirely. That the real job is governing what AI does to people, to organizations, to outcomes.
Those disagreements aren’t noise. They’re pointing at something real. There may not be one discipline called AI governance. There may be three.
The First School: Governance Happens Before the AI Runs
The pre-governance school has the strongest institutional backing. This is where most regulatory frameworks live. It’s where compliance teams feel most comfortable. And it maps cleanly onto how organizations have governed other enterprise systems for decades.
Pre-governance is about what you decide before deployment. What policies apply to this system? What data was it trained on? What bias testing was conducted? Who approved it for production? What documentation exists? Who is accountable for its outcomes?
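In practice, the answers to those questions tend to get captured in a structured record that travels with the system through approval. Below is a minimal sketch of what such a record might look like, written in Python purely for illustration. Nothing here comes from a specific framework or standard; every field name and example value is an assumption.

```python
from dataclasses import dataclass

# Hypothetical pre-deployment governance record; field names are illustrative,
# not a standard schema, and mirror the questions in the paragraph above.
@dataclass
class PreDeploymentRecord:
    system_name: str
    applicable_policies: list[str]      # which policies apply to this system
    training_data_sources: list[str]    # what data it was trained on
    bias_tests_completed: list[str]     # what bias testing was conducted
    production_approver: str            # who approved it for production
    documentation_links: list[str]      # what documentation exists
    accountable_owner: str              # who is accountable for its outcomes

# Example of a completed record (all values hypothetical).
record = PreDeploymentRecord(
    system_name="claims-triage-model",
    applicable_policies=["model-risk-policy-v2", "data-retention-policy"],
    training_data_sources=["claims-archive-2019-2023"],
    bias_tests_completed=["demographic-parity-check"],
    production_approver="model-risk-committee",
    documentation_links=["wiki/claims-triage/model-card"],
    accountable_owner="vp-claims-operations",
)
print(record.production_approver)
```

The point of the sketch is that pre-governance produces an artifact: a set of answers frozen at approval time, which is exactly its strength and, as we will see, its limitation.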
The EU AI Act is largely a pre-governance instrument. It asks organizations to conduct risk assessments, maintain technical documentation, and register high-risk systems before they operate. The NIST AI Risk Management Framework’s four core functions, Govern, Map, Measure, and Manage, likewise put much of their weight on work done before deployment. The assumption running through both is that critical governance work occurs in the design and approval phases, and that a well-governed system is one that was properly assessed before it was released.
This school has the advantage of being legible to existing organizational structures. Legal teams understand approval processes. Compliance teams understand documentation requirements. Procurement teams understand vendor assessments. Pre-governance fits into processes that organizations already know how to run.
The limitation is equally clear. Pre-governance assumes that the risks you identify before deployment are the risks that will matter in production. For deterministic systems, that assumption holds reasonably well. For probabilistic AI systems that interact with unpredictable inputs, adapt to user behavior, and produce outputs that no evaluation process fully anticipated, the assumption quickly breaks down. You can govern what you expect. You cannot pre-govern what you didn’t.
The Second School: Governance Happens While the AI Runs
The runtime governance school starts from a different premise. If you want to govern a system, you need to be present while it operates. Not reviewing its outputs afterward. Not assessing its design beforehand. Present. With the authority to intervene before consequences become irreversible.
This is the school that argues pre-governance is necessary but not sufficient. A policy document doesn’t stop a bad action. An approval process doesn’t catch the edge case that emerges six months into production. The temporal gap between when AI systems act and when humans review those actions is precisely where the most consequential governance failures occur.
Runtime governance requires infrastructure that most organizations haven’t built. An agent that evaluates every action against behavioral constraints before execution. An interrupt mechanism that can halt a system mid-operation when a threshold is crossed. A trust calibration layer that adjusts oversight based on what the system has demonstrated, not what it was designed to do. A continuous assessment of whether the system’s behavior today matches the behavioral contract that was defined for it.
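To make that concrete, here is a minimal sketch of what a runtime governance gate might look like. It is not any vendor’s product or API; the class names, the constraint, and the interrupt threshold are all assumptions chosen to show the shape of the mechanism.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ProposedAction:
    kind: str               # e.g. "send_email", "issue_refund"
    amount: float = 0.0     # monetary impact, if any
    reversible: bool = True

@dataclass
class RuntimeGovernor:
    # Behavioral constraints: each returns a reason string when an action violates it.
    constraints: list[Callable[[ProposedAction], Optional[str]]]
    violation_budget: int = 3                        # interrupt threshold
    violations: int = field(default=0, init=False)
    halted: bool = field(default=False, init=False)

    def review(self, action: ProposedAction) -> bool:
        """Return True if the action may execute, False if it is blocked."""
        if self.halted:
            return False
        reasons = [r for check in self.constraints if (r := check(action))]
        if reasons:
            self.violations += 1
            if self.violations >= self.violation_budget:
                self.halted = True  # interrupt: halt the run, not just this action
            return False
        return True

# One illustrative constraint: cap irreversible spend.
def cap_irreversible_spend(action: ProposedAction) -> Optional[str]:
    if not action.reversible and action.amount > 500:
        return "irreversible action above spend cap"
    return None

governor = RuntimeGovernor(constraints=[cap_irreversible_spend])
refund = ProposedAction(kind="issue_refund", amount=900, reversible=False)
print(governor.review(refund))  # False: stopped before execution, not flagged afterward
```

The structural point is the ordering: the check happens before the action executes, and repeated violations trip an interrupt rather than generating a report for later review.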
Here is where an interesting fracture appears within this school. Some practitioners argue that runtime governance means governing the AI directly. The model itself. The action before it executes. Others argue that this definition is too narrow, and that AI operates too quickly and too opaquely for meaningful runtime intervention at the model level. For this second group, runtime governance means closely monitoring the outputs and interactions of AI systems to intervene at the organizational level before harm accumulates.
Both positions agree that governance can’t wait for the post-mortem. They disagree about how close governance needs to be to the model.
The Third School: Governance Has Nothing to Do With the AI Itself
The third school is the most provocative and, in some circles, the most influential.
This school argues that governing AI is a category error. AI is a tool. Tools don’t require governance. The humans who use tools require governance. The organizations that deploy tools require governance. The outcomes that tools produce require governance. But to instrument a model with behavioral constraints and call that governance is to mistake the mechanism for the mission.
When IBM says AI governance is about making AI systems safe and ethical, this school pushes back. Data isn’t ethical by default. Models aren’t ethical by default. Ethics is a property of decisions people make about how systems are designed, deployed, and constrained. Governance, on this view, is the organizational and societal structure that holds those decisions accountable.
In this framing, post-governance is not what happens after a model runs. It is the entire frame within which AI deployment is evaluated. Did this system produce equitable outcomes? Did it operate within the organization’s stated values? Did the people affected by its decisions have recourse? Did the organization respond appropriately when problems emerged? These are governance questions that can only be answered in retrospect, by examining what the system actually produced in the world, not what it was designed to produce in a test environment.
This school tends to draw practitioners from policy, ethics, and organizational behavior. They are less concerned with the technical mechanisms of model governance and more concerned with the accountability structures that determine whether AI deployment serves human ends. Their governance framework includes regulatory compliance, stakeholder impact assessment, redress mechanisms, and organizational accountability for AI-driven decisions.
Their limitation is the mirror image of the pre-governance school’s. Post-governance, like pre-governance, cannot prevent harm in real time. It can identify it, respond to it, and build accountability structures that change behavior over time. But the patient who received the wrong AI-generated diagnosis, the loan applicant who was algorithmically denied without recourse, the worker whose performance was evaluated by an opaque system with no appeal, those outcomes have already occurred by the time post-governance examines them.
Three Schools, One Problem
The interesting thing about these three positions is that each one is correct about what the others miss.
Pre-governance advocates are right that runtime controls without proper design-time governance produce systems that are technically constrained but not thoughtfully governed. You can instrument a poorly designed system with excellent runtime controls and still produce systematically biased or harmful outcomes that no behavioral rule was designed to catch.
Runtime governance advocates are right that pre-governance without runtime enforcement produces policy documents that describe what should happen, but cannot prevent what does. A governance framework that has no mechanism for real-time intervention is a compliance artifact, not an operational control.
Post-governance advocates are right that both of the above can be executed perfectly at the technical level while still failing the humans and organizations affected by AI decisions. Governance that doesn’t measure its own outcomes against its stated values is governance that has optimized for process rather than purpose.
The temptation is to declare one of these the real AI governance and dismiss the others as adjacent concerns. That temptation should be resisted. Organizations that govern only pre-deployment will be surprised by production. Organizations that govern only at runtime will optimize for operational control while missing organizational impact. Organizations that govern only by outcomes will learn too slowly to prevent the harms they’re measuring.
The more useful question is not which school is right. It is whether the people in your organization responsible for AI governance are actually covering all three phases, or whether they have unconsciously adopted one school’s frame while calling it the complete picture.
Most have. Which means the gaps are real, present, and probably consequential.
The good news is that naming the three schools is the first step toward covering them. You cannot govern what you haven’t defined. And you cannot define AI governance as a unified discipline until you are honest about the fact that it isn’t one yet.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.