The Governance Dimension Nobody Is Measuring

There is a question that almost every consequential human institution asks before taking action, often implicitly, sometimes formally, but consistently enough that its absence is considered negligence.

Can we undo this if we are wrong?

Surgeons evaluate reversibility before they cut. Judges weigh it before they sentence. Investors consider it before they commit capital to an illiquid position. Diplomats think about it before they sign treaties. The question isn’t whether the decision is correct. It’s what happens if it turns out not to be.

AI governance frameworks, almost universally, do not ask this question.

They ask whether an action is permitted. They ask whether the agent is authorized. They ask whether the request falls within the defined scope. Some of the more sophisticated ones ask whether the action is appropriate given behavioral history and context. What almost none of them ask, as a distinct, weighted, consequential input to the governance verdict, is whether this action can be reversed if the verdict turns out to be wrong.

That omission is not a minor gap. It is a structural flaw in how most organizations are thinking about AI risk.

And it’s something Nomotic is focused on.

What Reversibility Actually Means

Reversibility is not a binary. It exists on a spectrum, and the governance implications shift significantly across that spectrum.

Some actions are fully reversible with no meaningful cost. A draft document created and then deleted. A query run against a read-only database. A recommendation generated but not yet acted upon. For these actions, a governance error in the permissive direction has low consequence. If the verdict was wrong and the action shouldn’t have happened, the correction is straightforward.

Some actions are reversible, but at a cost. A transaction that can be refunded, but the customer relationship has been affected. An email that can be recalled, but some recipients have already read it. A database record that can be restored from backup, but with two hours of data loss. For these actions, a governance error is recoverable but not free. The cost of recovery should inform the threshold for the original verdict.

Some actions are partially reversible. A communication sent to a subset of a mailing list can be corrected, but the original cannot be unsent. A medical record accessed without proper authorization can be logged, but the exposure has already occurred. A loan denial based on a flawed algorithmic assessment can be reversed on appeal, but the applicant’s immediate needs were not met. Partial reversibility means partial recovery. The governance question is which part cannot be recovered, and whether that partial consequence is acceptable given the confidence level of the verdict.

Some actions are irreversible. A message sent to a patient informing them of a terminal diagnosis that turned out to be a data error. A financial wire transfer executed and received. A public statement made on behalf of an organization. A regulatory filing submitted. These actions, once taken, cannot be untaken. Their consequences exist in the world permanently, regardless of what the governance system decides afterward.

The standard for evidence required to permit an action should scale with its irreversibility. This is not a novel idea. It is how virtually every mature risk management framework in every other domain operates. AI governance has not caught up.
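
To make the spectrum concrete, here is a minimal sketch of how the four classes described above might be encoded as a data type, with a few illustrative action types attached. The names and classifications are hypothetical examples, not Nomotic's schema.

```python
from enum import Enum

class Reversibility(Enum):
    """Illustrative reversibility classes, ordered from easiest to hardest to undo."""
    FULL = 1          # undo is cheap and complete (a deleted draft, a read-only query)
    COSTLY = 2        # undo is possible but carries real cost (a refunded transaction)
    PARTIAL = 3       # only part of the consequence can be recovered (a recalled email)
    IRREVERSIBLE = 4  # the consequence is permanent (an executed wire transfer)

# Hypothetical classifications for a handful of action types.
ACTION_REVERSIBILITY = {
    "create_draft_document": Reversibility.FULL,
    "refund_transaction": Reversibility.COSTLY,
    "send_mailing_list_update": Reversibility.PARTIAL,
    "execute_wire_transfer": Reversibility.IRREVERSIBLE,
}
```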

The Speed Problem Makes It Worse

Human decision-making has a built-in buffer for reversibility. Deliberation takes time. Time creates space for reconsideration. The friction of human-speed decisions provides an implicit reversibility check that slows consequential actions down enough that errors can sometimes be caught before they become permanent.

AI systems do not have that buffer. An agent evaluating whether to send a batch of customer notifications, execute a series of transactions, or submit a set of regulatory filings can make thousands of decisions in the time it takes a human to read a single one. The speed that makes AI systems valuable is the same speed that removes the natural reversibility buffer.

This matters because governance errors compound at machine speed in ways they don’t at human speed. A human accounts payable clerk who miscodes a transaction makes one error before someone notices. An AI system making the same miscoding error can process an entire month of transactions before the morning reconciliation reveals the problem. The actions are individually small. The aggregate consequence is not. And the aggregate consequence is difficult or impossible to reverse cleanly.

Governance frameworks designed around individual action evaluation without explicit reversibility weighting will systematically underestimate this compounding risk. The individual action looks low-stakes. The population of that action at machine speed is not.
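
A rough back-of-the-envelope sketch makes the compounding point concrete. The error rate and volumes below are assumptions chosen for illustration, not measurements.

```python
# Illustrative arithmetic only: the rate and volumes are assumed, not measured.
error_rate = 0.001        # 0.1% chance that any single action is mis-governed
human_volume = 200        # actions a human clerk processes between reviews
agent_volume = 50_000     # actions an agent processes between reviews

print(f"Expected human errors per review cycle: {error_rate * human_volume:.1f}")  # ~0.2
print(f"Expected agent errors per review cycle: {error_rate * agent_volume:.1f}")  # ~50.0
```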

What Reversibility-Aware Governance Looks Like

The practical implementation of reversibility as a governance dimension requires a few specific things that most current frameworks lack.

The first is classification. Every action type needs a reversibility classification before it is evaluated, not during. A governance system that has to reason from scratch about whether a given action is reversible at evaluation time will introduce latency and inconsistency. The reversibility class of an action should be a property of the action type, defined in the behavioral contract, not derived on the fly.
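
A sketch of what contract-time classification could look like, assuming a hypothetical behavioral-contract entry per action type; the structure and field names are illustrative, not an actual Nomotic artifact.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionContract:
    """One behavioral-contract entry: reversibility is declared as a property
    of the action type up front, not derived at evaluation time."""
    action_type: str
    reversibility: str  # "full", "costly", "partial", or "irreversible"

CONTRACT = {
    c.action_type: c
    for c in [
        ActionContract("generate_recommendation", "full"),
        ActionContract("post_journal_entry", "costly"),
        ActionContract("send_customer_email", "partial"),
        ActionContract("submit_regulatory_filing", "irreversible"),
    ]
}

def reversibility_of(action_type: str) -> str:
    """Fail closed: an unclassified action type is treated as irreversible."""
    entry = CONTRACT.get(action_type)
    return entry.reversibility if entry else "irreversible"
```

Failing closed on unclassified action types is one possible default; the important property is that the lookup is a constant-time table read, not fresh reasoning at evaluation time.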

The second is threshold adjustment. The confidence required to permit an action should be a function of its reversibility class. A Tier 2 verdict that would permit a fully reversible action at a confidence score of 0.65 should require 0.85 for a partially reversible action and human confirmation for an irreversible one. The thresholds should be explicit, documented, and enforced, not implicit assumptions about what confidence levels mean.
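
The thresholds named in the paragraph above translate directly into a lookup table and a small verdict function. The 0.65 and 0.85 values come from the example in the text; the intermediate value for reversible-at-a-cost actions is an assumption added for completeness.

```python
CONFIDENCE_THRESHOLDS = {
    "full": 0.65,          # fully reversible: permit at moderate confidence
    "costly": 0.75,        # assumed intermediate threshold, not from the text
    "partial": 0.85,       # partially reversible: demand stronger evidence
    "irreversible": None,  # no autonomous permit; route to human confirmation
}

def verdict(reversibility: str, confidence: float) -> str:
    threshold = CONFIDENCE_THRESHOLDS[reversibility]
    if threshold is None:
        return "escalate_to_human"
    return "permit" if confidence >= threshold else "deny"

# The same confidence score yields different verdicts across reversibility classes.
assert verdict("full", 0.80) == "permit"
assert verdict("partial", 0.80) == "deny"
assert verdict("irreversible", 0.99) == "escalate_to_human"
```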

The third is rollback infrastructure. Knowing that an action is reversible is only useful if the reversal can actually be executed. This requires transactional design, state management, and rollback capabilities that most agentic systems are not built with. An agent granted authority to take reversible actions under a lower confidence threshold, on the assumption that errors can be corrected, must actually be able to correct them. The governance framework and execution infrastructure must be designed together, not independently.
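
One common pattern for making reversal executable is saga-style compensation: an executor only runs a step if it has also been given a way to undo it. The sketch below assumes that pattern; it is illustrative, not a description of any particular product's execution layer.

```python
from typing import Callable, List

class ReversibleExecutor:
    """Runs actions only when paired with a compensating action, and replays
    the compensations in reverse order if the work has to be rolled back."""

    def __init__(self) -> None:
        self._compensations: List[Callable[[], None]] = []

    def execute(self, action: Callable[[], None], compensate: Callable[[], None]) -> None:
        action()
        self._compensations.append(compensate)

    def rollback(self) -> None:
        while self._compensations:
            self._compensations.pop()()  # undo the most recent action first

# Usage sketch with placeholder operations.
executor = ReversibleExecutor()
executor.execute(
    action=lambda: print("place hold on invoice 1017"),
    compensate=lambda: print("release hold on invoice 1017"),
)
executor.rollback()
```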

The fourth is proportional human oversight. Not all actions need human review. Human oversight at machine speed is neither achievable nor desirable. But irreversible actions below a certain trust threshold should require human confirmation as an architectural property, not a policy suggestion. The human-in-the-loop question, which so often gets answered with a dashboard and a prayer, becomes answerable when framed around reversibility: a human needs to be in the loop for irreversible actions above a defined consequence threshold, with a defined response time and the authority to halt execution.
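
A minimal routing sketch of that rule: irreversible actions above an assumed consequence threshold block on human confirmation, while everything else proceeds through the normal thresholds. The threshold, the review window, and the field names are all assumptions for illustration.

```python
import queue
from dataclasses import dataclass

@dataclass
class PendingAction:
    action_type: str
    reversibility: str
    consequence_score: float  # assumed 0-1 estimate of consequence

CONSEQUENCE_THRESHOLD = 0.7   # assumed cut-off for mandatory human review
REVIEW_WINDOW_SECONDS = 900   # assumed maximum wait for a human verdict before halting

def route(action: PendingAction, review_queue: "queue.Queue[PendingAction]") -> str:
    """Irreversible, high-consequence actions halt until a human confirms or the
    review window expires; everything else is evaluated autonomously."""
    if action.reversibility == "irreversible" and action.consequence_score >= CONSEQUENCE_THRESHOLD:
        review_queue.put(action)
        return "await_human_confirmation"
    return "evaluate_autonomously"
```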

Why This Has Been Missing

The absence of reversibility as a first-class governance dimension has a straightforward explanation. Most AI governance frameworks were designed by people thinking about classification problems and content safety, where the relevant question is whether an output is harmful, not whether it can be undone. The analytical frame that produced output filtering, toxicity scoring, and bias detection does not naturally extend to the temporal and consequential properties of actions in the world.

Agentic AI changed the relevant question. When AI systems move from generating outputs to taking actions, the consequential properties of those actions, including their reversibility, become central governance concerns. The frameworks have not fully caught up.

The organizations that will govern agentic AI well are the ones that recognize this shift explicitly and build reversibility into their governance architecture before something irreversible happens that they wish they could take back.

The question that mature institutions have always asked before acting is the same question that AI governance frameworks need to be asking, formally, consistently, and with real consequences for the verdict.

Can we undo this if we are wrong?

If the answer is no, the standard of evidence required to permit the action should reflect that. It rarely does today. That is the gap worth closing.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Book me to speak at your next event.

Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.