Irrational Numbers and Unsolved Problems: Pi Governance


Happy Pi Day. March 14th. 3.14. And since we’re celebrating irrational numbers that never quite resolve, let’s talk about AI governance.

π and AI Governance: An Unexpected Parallel

Every year on Pi Day (3/14) we celebrate pie, whether pizza or berry, and of course π (pi), the mathematical constant that represents the ratio of a circle’s circumference to its diameter. What makes π fascinating is that it is irrational: its decimal expansion continues forever without repeating, yet it still plays a foundational role in mathematics, physics, and engineering.

This characteristic offers a useful metaphor for AI governance.

1. Infinite complexity, foundational importance
Just as π contains infinitely many digits, the impacts of artificial intelligence are vast and constantly expanding. We cannot anticipate every outcome of advanced AI systems. Yet, like π in mathematics, governance frameworks must serve as foundational constants that guide how systems are built and deployed.

2. Precision despite incompleteness
In practice, scientists rarely use all the digits of π. A truncated approximation such as 3.14159 provides enough precision for most applications.
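
To put a number on that claim, here is a quick check in Python (the one-kilometer circle is just an example):

# How far off is a circumference computed with 3.14159 instead of
# full-precision pi, for a circle one kilometer across? About 2.65 mm.
import math

diameter_m = 1_000.0
approx = 3.14159 * diameter_m
exact = math.pi * diameter_m
print(f"circumference: {exact:.2f} m")
print(f"error: {abs(exact - approx) * 1000:.2f} mm")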

Similarly, AI governance does not need perfect foresight. Policies, ethical guidelines, and regulatory frameworks can function as practical approximations, evolving over time as technology advances.

3. Iterative refinement
Mathematicians continuously compute more digits of π using better algorithms. Governance works the same way:

  • early principles (ethics guidelines)
  • evolving standards (industry frameworks)
  • formal regulation (laws and international agreements)

Each iteration improves accuracy and oversight, just as new calculations extend π.
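
That refinement loop has a miniature analog in code. Here is a toy Python sketch using the Nilakantha series, where each added term agrees with π to more digits:

# Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
def nilakantha(terms: int) -> float:
    pi, sign = 3.0, 1.0
    for k in range(2, 2 * terms + 2, 2):
        pi += sign * 4.0 / (k * (k + 1) * (k + 2))
        sign = -sign
    return pi

for n in (1, 10, 100, 1000):
    print(n, nilakantha(n))  # successive iterations converge on 3.14159...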

4. Universality
π appears everywhere: geometry, probability, wave physics, and cosmology. In the same way, AI governance must be universal, influencing industries ranging from healthcare and finance to defense and education.

The core problem enterprises won’t say out loud.

I’ve been in many enterprise AI conversations over the last several years. The demand for agentic AI is real. The enthusiasm is real. The roadmaps are real.

What is also real, and what doesn’t make it into the press releases, is that most enterprises deploying agentic AI are quietly terrified of it. Like π, the outputs of agentic AI never fully resolve. That’s not a defect. It’s the nature of the thing. The question is whether you’ve built the foundational constant that keeps it governable.

Agentic AI is probabilistic. The same agent, given the same input, on two different occasions, can produce meaningfully different outputs. That’s the nature of large language models. The probability distributions that make them useful, the contextual reasoning, and the ability to navigate ambiguity are the same properties that make their outputs non-deterministic.

And enterprises, particularly in regulated industries, cannot operate on probabilistic systems without deterministic controls. A healthcare organization cannot tell a regulator, “our agent probably made the right call.” A financial institution cannot explain a consequential customer decision with “the model’s confidence was high.” Legal, compliance, operations, and risk teams all require the ability to say with certainty: this action was evaluated, this action was authorized, and this is the documented reason.

For years, my advice in these situations has been honest to the point of being unpopular: if you cannot afford a probabilistic outcome, do not deploy agentic AI. It’s not ready for your environment. Your environment is not ready for it.

But typically, “don’t deploy it” isn’t actually an option. The business pressure, the competitive pressure, the internal mandates, they don’t go away because the architecture is awkward. Enterprises will deploy agentic AI. The question is whether they do it with governance or without it.

That question is what Nomotic was designed to answer: a counterpart to agentic AI.

Derived from the Greek nomos (law, rule, governance), Nomotic represents an intelligent governance layer that defines the rules, boundaries, and constraints under which agentic systems operate. 

The solution: a deterministic layer on a probabilistic system.

The insight that drove Nomotic’s architecture is straightforward. You cannot make a probabilistic system deterministic. But you can wrap it in a deterministic governance layer that makes its behavior governable.

The agent stays probabilistic. That’s fine. The governance layer provides certainty about the evaluation of every action the agent attempts. Every decision is evaluated. Every evaluation produces a documented, auditable verdict. Every verdict is reached through a defined, repeatable process. That is the determinism enterprises need. Not in the model. In the governance.

Nomotic achieves this through a three-tier decision cascade that deliberately combines deterministic and probabilistic mechanisms and introduces human judgment exactly where automation cannot provide certainty.

Tier 1 is pure determinism. Hard boundaries. Binary pass/fail against explicit rules. Regulatory prohibitions, absolute authority limits, and hard compliance constraints. These execute first, in microseconds, with no ambiguity. A scope violation doesn’t need a weighted analysis. It needs a veto. This tier provides the certainty that governance requires for the clearest cases.
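
For intuition, here is what a hard-boundary check looks like in spirit. This is an illustrative Python sketch, not Nomotic’s implementation; the scopes and limits are invented.

from dataclasses import dataclass

@dataclass
class Action:
    agent_id: str
    scope: str       # e.g., "payments.refund"
    amount: float    # e.g., a transaction size

# Hypothetical hard limits; a real deployment would load these from policy.
ALLOWED_SCOPES = {"payments.refund", "payments.query"}
AUTHORITY_LIMIT = 10_000.00

def tier1_veto(action: Action) -> str | None:
    """Return a veto reason, or None if the action clears every hard boundary."""
    if action.scope not in ALLOWED_SCOPES:
        return f"scope violation: {action.scope}"
    if action.amount > AUTHORITY_LIMIT:
        return f"authority limit exceeded: {action.amount:.2f} > {AUTHORITY_LIMIT:.2f}"
    return None  # deterministic pass; the action proceeds to Tier 2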

Tier 2 is probabilistic triage. Actions that clear Tier 1 enter semantic evaluation. Vector similarity search against an embedded governance landscape routes each action based on its proximity to known boundaries. Actions clearly within bounds are fast-tracked. Actions near edges are flagged for deeper evaluation. Actions in unfamiliar territory escalate immediately. The probabilistic layer doesn’t make the final call. It determines how much scrutiny each case requires. This is what makes governance feasible at machine speed.
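
Sketched the same way, with invented thresholds and labels: proximity to the embedded landscape decides how much scrutiny an action receives.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm  # assumes non-zero vectors

def tier2_route(action_vec, landscape, min_familiar=0.60) -> str:
    """landscape: list of (vector, label), label being "in-bounds" or "boundary"."""
    sim, label = max((cosine(action_vec, v), lab) for v, lab in landscape)
    if sim < min_familiar:
        return "escalate"    # unfamiliar territory: human review, immediately
    if label == "boundary":
        return "flag"        # near a known edge: deeper Tier 3 evaluation
    return "fast-track"      # clearly within known bounds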

Tier 3 is deliberative and human-inclusive. Actions that Tier 2 flags as ambiguous route to targeted deterministic verification, which applies only the specific rules relevant to the flagged concern. And where automation cannot produce certainty, the system escalates to human review before the action proceeds. Not after. Before. Human-in-the-loop is not a fallback for when something goes wrong. It’s a designed component of the governance architecture for the cases that require it.
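
The Tier 3 shape, sketched with hypothetical names: run only the check that matches the flagged concern, and if it cannot produce certainty, block on a human before the action runs.

def tier3_review(action, concern: str, targeted_rules: dict, ask_human) -> bool:
    """targeted_rules maps a concern (e.g., "authority") to a deterministic
    check returning True (authorize), False (deny), or None (no certainty).
    ask_human blocks until a reviewer decides; the action waits."""
    check = targeted_rules.get(concern)
    if check is not None:
        verdict = check(action)
        if verdict is not None:
            return verdict             # automation produced certainty
    return ask_human(action, concern)  # escalate BEFORE the action proceeds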

You don’t need every digit of π to build a bridge. You need enough precision for the application. The same logic applies here. Governance doesn’t require perfect foresight. It requires enough determinism to evaluate, authorize, and document every action as it happens, and to learn, improve, and adapt as needed.

Across all three tiers, every action is simultaneously evaluated against 14 independent dimensions: scope, authority, resource consumption, behavioral pattern, impact, stakeholder exposure, incident history, isolation level, timing, precedent, transparency, human oversight status, ethical posture, and jurisdictional compliance. Not a single score. Not allow/deny. Fourteen dimensions that together produce a complete picture of whether this action, in this context, by this agent, is appropriate.
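
As a data structure, that is a verdict with fourteen named scores rather than one number. The sketch below uses the dimension names from this article; the structure itself is an assumption, not Nomotic’s schema.

from dataclasses import dataclass

DIMENSIONS = (
    "scope", "authority", "resource_consumption", "behavioral_pattern",
    "impact", "stakeholder_exposure", "incident_history", "isolation_level",
    "timing", "precedent", "transparency", "human_oversight_status",
    "ethical_posture", "jurisdictional_compliance",
)

@dataclass
class Verdict:
    scores: dict  # one entry per dimension, never collapsed to a single score

    def is_complete(self) -> bool:
        return set(self.scores) == set(DIMENSIONS)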

Three tiers. Fourteen dimensions. 3.14159… the digits keep going. So does the governance.

That combination is what makes it possible to deploy agentic AI in enterprise environments that cannot afford probabilistic outcomes.

The full lifecycle, because partial coverage isn’t coverage.

The three-tier evaluation is the runtime layer. But Nomotic’s governance starts before the agent runs and continues after it finishes.

Pre-governance is the stage in which agents receive a cryptographic identity. Agent Birth Certificates bind each agent’s identity to its governance parameters at initialization. Archetype priors establish behavioral baselines before the first action is taken. A healthcare agent starts with different expected patterns than a financial agent or an operations agent. Governance doesn’t begin with the first action. It begins at agent creation.
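
In sketch form, and with invented field names rather than Nomotic’s actual certificate format, the binding can be as simple as hashing the governance parameters into the identity record at creation:

import hashlib, json, uuid
from datetime import datetime, timezone

def birth_certificate(archetype: str, governance_params: dict) -> dict:
    record = {
        "agent_id": str(uuid.uuid4()),
        "archetype": archetype,  # e.g., "healthcare": sets behavioral priors
        "governance_params": governance_params,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the canonical form so any later change to the bound parameters
    # is detectable against the recorded fingerprint.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["fingerprint"] = hashlib.sha256(canonical).hexdigest()
    return record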

Runtime governance is where the three-tier cascade operates, where authority resides, where behavioral fingerprints are maintained, and where trust evolves with every action. Trust starts at a baseline and moves based on observed behavior. Violations cost trust disproportionately. Recovery requires demonstrated consistency. An agent with a history of violations faces more stringent governance. An agent that has earned trust through consistent behavior has proportionally more latitude. This feedback loop is what makes governance adaptive rather than static.
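
The asymmetry is the point, and a toy version makes it visible (the numbers are invented):

def update_trust(trust: float, violation: bool) -> float:
    if violation:
        trust -= 0.20  # a single violation costs trust sharply
    else:
        trust += 0.01  # recovery only through sustained consistency
    return min(max(trust, 0.0), 1.0)

# One violation (-0.20) takes roughly twenty clean actions (+0.01 each)
# to repair, so governance tightens quickly and relaxes slowly.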

Post-governance is where the record lives. Every governance decision produces a tamper-evident, hash-chained audit record. Not logs. A cryptographic chain of accountability that cannot be edited, and that regulators can actually verify. The audit trail enables counterfactual replay, meaning you can reconstruct exactly what the governance layer evaluated and why, for any decision, at any point in history.
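
Hash-chaining is a standard construction and worth seeing in a few lines. This is illustrative, not Nomotic’s record format:

import hashlib, json

def append_record(chain: list, decision: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"decision": decision, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    chain.append(body)

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"decision": rec["decision"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True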

Any governance solution that covers only one of these phases is telling you a partial story. Observability covers post-governance, partially. Policy engines cover pre-governance partially. Output guardrails cover a slice of runtime. None of them covers all three with a continuous, adaptive system that carries behavioral state from one phase to the next.

That continuity is what enables the full picture. An agent’s behavior in post-governance informs its priors in pre-governance for its next deployment. Runtime behavioral fingerprints accumulate into the audit trail that post-governance relies on. The phases are not separate tools. They are stages of a single governance lifecycle.

Installing it is easier than making pie from scratch.

pip install nomotic

Three lines to connect, evaluate, and govern:

from nomotic import Nomotic
nomo = Nomotic.connect()
result = await nomo.execute(tool_fn, **params)
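
Since execute is awaited, those three lines live inside an async function in a real script. A minimal runnable shape, with tool_fn and its parameter invented for illustration:

import asyncio
from nomotic import Nomotic

async def main():
    nomo = Nomotic.connect()

    def tool_fn(customer_id: str) -> str:  # any callable your agent invokes
        return f"refund issued for {customer_id}"

    # The call is evaluated by the governance cascade before it executes.
    result = await nomo.execute(tool_fn, customer_id="c-123")
    print(result)

asyncio.run(main())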

Framework-agnostic. Zero external dependencies. Works with LangGraph, CrewAI, AutoGen, OpenAI, Claude, and any custom agent architecture. More integrations coming soon.

Organizations that have been told agentic AI isn’t ready for their regulated environment can stop waiting. The counterpart exists. The deterministic layer on the probabilistic system exists. The full lifecycle governance infrastructure exists, is production-ready, and any engineering team can have it running before the next LinkedIn post declares the problem unsolved.

Why this took a while to say.

π shows up everywhere: geometry, physics, probability, cosmology. Governance needs to work the same way. Healthcare, finance, operations, defense. Every industry deploying agents needs the same foundational constant underneath.

Pi is irrational. Agentic AI is probabilistic. Neither has to be ungovernable. π teaches us that some constants anchor systems of infinite complexity. In the age of artificial intelligence, responsible governance may become one of those constants.

That’s Nomotic. It’s built. It’s available. Any organization can begin using it today.

Visit Nomotic and Follow Nomotic.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Book me to speak at your next event.

Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infailible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.