
Why Nomotic Governance Must Ask Whether Agents Should Act


In Jurassic Park, Ian Malcolm delivers a warning that has only grown more urgent: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

The dinosaurs escaped. The park collapsed. Not because the science failed, but because nobody built constraints into the system.

That single word, “should,” is now the defining question in AI. Not whether enterprises will deploy agents. Whether those agents will know when not to act.

The Gap Between Can and Should

Agentic AI is fundamentally about action. These systems perceive their environment, reason through information, plan approaches, and execute decisions. They access databases, process transactions, communicate with users, and trigger workflows. The technical achievement is remarkable.

But remarkable isn’t the same as responsible.

An agent that can access sensitive data still needs to ask whether it should in this specific context. An agent that can process a refund still needs to evaluate whether it should in these particular circumstances. An agent that can make a commitment still needs to determine whether it has the authority to do so.

This isn’t a question humans answer once during deployment. It’s a question the agent must answer continuously, in real time, before every action.

Agentic AI is about actions. What systems can do.

Nomotic AI is about laws. What systems should do.

Laws for Agents

Nomotic AI provides that missing layer.

The term derives from the Greek nomos, meaning law, rule, or governance. Where agentic AI focuses on action, nomotic AI focuses on authority and constraint. It operates as runtime governance within the agent’s own decision-making process.

Think of it as the difference between ability and permission. A capable agent has abilities. A governed agent has abilities bounded by permissions. Every time the agent prepares to act, the nomotic layer evaluates whether the action is authorized. Does the context support it? Are the conditions met? Should this proceed?

This isn’t enterprise governance debating AI strategy in quarterly meetings. This is runtime governance built into the agent itself. The “should” question gets asked and answered before every execution.
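
To make that concrete, here is what a pre-execution "should" gate might look like in code. This is a simplified sketch, not a reference implementation; the NomoticLayer class, the Ruling values, and the refund rule are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Ruling(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"   # route to human review

@dataclass
class ActionRequest:
    action: str    # e.g. "issue_refund"
    context: dict  # who is asking, why, how much, under what conditions

class NomoticLayer:
    """Runtime governance: every action is evaluated against explicit
    rules before it may execute. Capability alone never authorizes."""
    def __init__(self, rules):
        self.rules = rules  # callables: ActionRequest -> Ruling or None

    def evaluate(self, request: ActionRequest) -> Ruling:
        for rule in self.rules:
            ruling = rule(request)
            if ruling is not None:
                return ruling
        return Ruling.DENY  # no law authorized it: deny by default

def refund_rule(req: ActionRequest):
    if req.action != "issue_refund":
        return None
    if req.context.get("amount", 0) <= 100:
        return Ruling.ALLOW
    return Ruling.ESCALATE  # large refunds go to a human

governance = NomoticLayer([refund_rule])
print(governance.evaluate(ActionRequest("issue_refund", {"amount": 40})))   # ALLOW
print(governance.evaluate(ActionRequest("issue_refund", {"amount": 900})))  # ESCALATE
print(governance.evaluate(ActionRequest("delete_records", {})))             # DENY
```

The key design choice is the last line of evaluate: when no law speaks, the default is deny. Capability never authorizes on its own.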

Agentic AI is about what systems can do. Nomotic AI is about what they should do.

Why Agents Need Laws

Without nomotic governance, agents operate on a dangerous assumption: if they can do something, and a request triggers it, they should proceed. Capability equals authorization.

This assumption fails constantly.

A request technically falls within the agent’s abilities but violates policies nobody thought to encode. An edge case arises where the action is permitted in general but inappropriate right now. A manipulation attempt tricks the agent into doing something it should never do.

In each case, the agent had no law telling it to stop, only capability telling it to go.

Gartner estimates that over 40 percent of enterprise agentic AI projects will be canceled by 2027 due to inadequate risk controls. These failures won’t stem from technical shortfalls. The agents will work beautifully. They’ll fail because nobody gave them laws governing when and how to use their abilities.

They’ll do exactly what they can. The problem is that “able” and “appropriate” aren’t the same thing.

How Nomotic Governance Works

Traditional governance relies on static rules configured at deployment. Permissions are set. Policies are documented. The agent operates within fixed boundaries.

Static rules fail because agents operate in dynamic environments. The same action might be appropriate in one context and harmful in another. Rules written for anticipated scenarios can’t address situations that actually emerge.

Nomotic governance is contextual, adaptive, and continuous.

Contextual means the governance layer understands intent, not just actions. An agent accessing customer data to resolve a service issue looks very different from the same agent accessing the same data following a manipulation attempt. Same action, different context, different ruling.

Adaptive means the agent’s authority adjusts based on observed behavior. Reliable performance earns expanded trust. Unexpected behavior triggers increased scrutiny. The laws respond to evidence rather than remaining frozen.

Continuous means governance happens at runtime, not just deployment. Every action passes through evaluation. Every execution requires authorization. The agent never acts without first determining that it should.
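
Here is a toy illustration of the adaptive piece, where an agent's authority tracks its record. Again, a sketch under assumptions: the TrustCalibrator class, its scores, and its thresholds are invented for illustration.

```python
class TrustCalibrator:
    """Adaptive authority: permissions expand with demonstrated
    reliability and contract sharply after unexpected behavior."""
    def __init__(self, base_limit: float = 100.0):
        self.score = 0.5             # trust level in [0, 1]
        self.base_limit = base_limit

    def record(self, outcome_ok: bool) -> None:
        if outcome_ok:
            self.score = min(1.0, self.score + 0.02)  # trust is earned slowly
        else:
            self.score = max(0.0, self.score - 0.25)  # and lost quickly

    def current_limit(self) -> float:
        # The agent's authority (here, a spending limit) tracks its record.
        return self.base_limit * (0.5 + self.score)

trust = TrustCalibrator()
for _ in range(10):
    trust.record(outcome_ok=True)
print(round(trust.current_limit(), 2))  # expanded after a clean record
trust.record(outcome_ok=False)
print(round(trust.current_limit(), 2))  # contracted after one violation
```

The asymmetry is deliberate: trust accumulates slowly through reliable performance and drops quickly after a violation, mirroring how scrutiny should respond to evidence.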

Architectural Flexibility

Nomotic governance can be positioned in multiple ways depending on the level of control required.

Before the agent acts: Governance intercepts requests before execution, evaluating whether the action should proceed, proceed with modifications, or escalate to human review. Nothing executes without passing through governance first.

After the agent acts: Governance evaluates outcomes to assess whether the action aligned with policy and whether similar actions should be handled differently. This approach prioritizes learning over prevention.

Parallel to the agent: Governance monitors in real time as the agent operates, intervening mid-workflow if behavior deviates from expectations. Actions proceed while maintaining the ability to course-correct.

Wrapping the agent: Governance forms an envelope around the agent. All inputs and outputs pass through the nomotic layer. Nothing enters or exits without evaluation.

Most mature implementations combine these patterns. High-risk actions require pre-execution gates. All actions feed post-execution analysis. Real-time monitoring catches emerging issues. The architecture depends on the stakes.
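
As a sketch of the wrapping pattern combined with pre- and post-execution checks, consider the following. The GovernanceEnvelope class and its hooks are hypothetical, meant only to show the shape of the envelope:

```python
class GovernanceEnvelope:
    """Wrapper pattern: all inputs and outputs pass through the
    nomotic layer; nothing enters or exits without evaluation."""
    def __init__(self, agent, pre_check, post_check):
        self.agent = agent
        self.pre_check = pre_check    # gate before execution
        self.post_check = post_check  # audit after execution

    def handle(self, request):
        if not self.pre_check(request):
            return {"status": "blocked", "reason": "failed pre-execution gate"}
        result = self.agent(request)      # the agent acts only inside the envelope
        self.post_check(request, result)  # feed outcomes back for learning
        return {"status": "ok", "result": result}

# Toy agent and checks, for illustration only.
audit_log = []
envelope = GovernanceEnvelope(
    agent=lambda req: f"processed {req['action']}",
    pre_check=lambda req: req["action"] in {"lookup_order", "issue_refund"},
    post_check=lambda req, res: audit_log.append((req["action"], res)),
)

print(envelope.handle({"action": "lookup_order"}))     # passes both layers
print(envelope.handle({"action": "export_all_data"}))  # blocked at the gate
```

Nothing reaches the agent without passing the gate, and every outcome feeds the audit log for post-execution analysis.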

Govern, Authorize, Trust, Evaluate

Nomotic governance operates through four functions.

Govern establishes the rules. What boundaries exist? What policies apply? How do those rules evolve as the agent’s abilities expand?

Authorize grants permission. What actions are permitted? Under what conditions? Within what limits? Authorization is explicit rather than assumed.

Trust calibrates authority. Has the agent demonstrated reliability? Should its permissions expand or contract based on what it’s actually done?

Evaluate measures alignment. Are the agent’s actions producing appropriate outcomes? Are the laws working as intended? What needs adjustment?

These functions operate continuously, ensuring that capable agents remain governed agents.
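
One way to picture how the four functions fit together is a single pass of a loop that, in practice, runs continuously. All names, numbers, and policies below are illustrative, not a prescribed implementation:

```python
def govern() -> dict:
    # Govern: establish the rules that currently apply.
    return {"max_refund": 100.0}

def trust(history: list) -> float:
    # Trust: calibrate authority from demonstrated reliability.
    return sum(history) / len(history) if history else 0.5

def authorize(action: dict, rules: dict, trust_score: float) -> bool:
    # Authorize: explicit permission, with limits scaled by earned trust.
    return action["amount"] <= rules["max_refund"] * (0.5 + trust_score)

def evaluate(approved: bool) -> bool:
    # Evaluate: record whether the ruling produced an appropriate outcome.
    return approved

rules = govern()
history = [True, True, True]                        # a clean track record
action = {"type": "issue_refund", "amount": 120.0}

approved = authorize(action, rules, trust(history))
history.append(evaluate(approved))                  # feeds the next calibration
print("proceed" if approved else "escalate or deny")
```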

The Question Every Agent Must Answer

AI systems will become more powerful, more dynamic, and more consequential. The question isn’t whether organizations will deploy them.

The question is whether those agents will have laws.

Agentic AI gives systems the power to act. Nomotic AI gives them the judgment to pause and ask “should I?” before they do.

Ability to act. Laws to govern. Every agent needs both.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Schedule a free call to start your AI Transformation.

Book me to speak at your next event.

Chris Hood is an AI strategist and author of the #1 Amazon Best Sellers Infallible and Customer Transformation, and has been recognized as one of the Top 40 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in April 2026.
