The 13 Dimensions: A Complete Nomotic Architecture
Ask someone how their AI governance works, and you will hear about two things: a permissions check before execution and a log file after.
That is not governance. That is a bouncer and a security camera. One decides who gets in. The other records what happened. Neither has any authority over what occurs inside.
A complete nomotic architecture requires thirteen governance dimensions operating simultaneously, each evaluating a different aspect of every consequential AI action. Most organizations implement two or three. The rest are gaps. And gaps are where failures live.
Why Dimensions, Not Layers
The instinct is to think of governance as a stack. Action enters at the top, passes through checkpoints in sequence, and exits at the bottom. Each layer does its job and hands off to the next.
This is the relay race model, and it is exactly wrong for AI governance.
Governance dimensions are not sequential. They are simultaneous. When an AI agent takes an action, it is not risk first, then ethics, then security. All thirteen dimensions are evaluated at once, each contributing a signal to a unified assessment. Some fire. Some stay silent. The combination determines the outcome.
Think of it like a diagnostic panel, not a conveyor belt. A blood test does not check cholesterol, then glucose, then white cell count in that order. It evaluates everything simultaneously because the relationships between results matter as much as the individual values. High glucose means one thing alone. High glucose combined with elevated inflammation markers means something different entirely.
Governance works the same way. A security signal means one thing in isolation. That same security signal, combined with a bias flag and a missing authorization, means something else entirely. Sequential evaluation cannot capture these interactions. Simultaneous evaluation can.
The thirteen dimensions are not a pipeline. They are a diagnostic framework in which every dimension can activate on every action, and the pattern of activation reveals the real story.
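To make the contrast concrete, here is a minimal sketch in Python. The dimension names come from this article; the function names, signal values, and combination rule are hypothetical illustrations, not a reference implementation.

```python
# Hypothetical sketch of the two models. Signal levels are assumed:
# 0 = clear, 1 = medium, 2 = elevated.

def relay_race(action, checks):
    """Sequential model: each check passes or blocks in isolation."""
    for check in checks:
        if not check(action):
            return "blocked"  # later checks never run, never see context
    return "approved"         # no check ever saw the full picture

def diagnostic_panel(action, dimensions):
    """Simultaneous model: every dimension fires; the pattern decides."""
    signals = {name: evaluate(action) for name, evaluate in dimensions.items()}
    # Toy combination rule: signals that are tolerable alone
    # demand attention when they co-occur.
    flagged = [name for name, level in signals.items() if level > 0]
    if len(flagged) >= 2:
        return f"human review ({', '.join(flagged)})"
    return "approved"

# Risk and ethics each raise a non-critical signal; only the
# diagnostic model sees that they raised it together.
dims = {
    "risk":   lambda action: 2,  # elevated
    "ethics": lambda action: 1,  # medium
    "audit":  lambda action: 0,  # clear
}
print(diagnostic_panel({"type": "wire_transfer"}, dims))  # human review (risk, ethics)
```

The relay race returns the verdict of whichever check fails first. The diagnostic panel returns one assessment informed by all of them, which is the property everything that follows depends on.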
The Thirteen Dimensions
Each dimension evaluates a distinct aspect of an AI action. Each produces a signal. Together, those signals form the governance assessment.
Risk evaluates the potential consequences of an action before it occurs. What could go wrong? What is the blast radius if it does? Risk assessment is not about whether something will fail. It is about what happens if it does. An action with a low probability of failure but catastrophic consequences requires different governance than one with a high probability of failure but trivial consequences.
Authorization verifies that the agent has been explicitly granted permission to take this specific action in this specific context. Not whether it can. Whether it may. Authorization is delegated, never assumed. If the agent was not given this authority, the action does not proceed, regardless of capability.
Pre-Execution Check validates that all preconditions for the action are satisfied before execution begins. Are the required inputs present? Are dependencies available? Are environmental conditions within expected parameters? This is the mechanical verification that the action is ready to execute, separate from whether it should.
Verifiable Trust assesses the agent’s earned credibility for this type of action based on observed historical behavior. An agent who has consistently handled routine transactions earns trust for routine transactions. That trust does not automatically extend to edge cases, higher-value actions, or unfamiliar contexts. Trust is scoped and evidence-based.
Privacy evaluates whether the action involves, exposes, or transfers sensitive information and whether the handling complies with applicable protections. Privacy is not a checkbox. It is contextual. The same data element may be appropriate to access in one workflow and prohibited in another. The dimension evaluates the specific action in its specific context.
Audit ensures that the action and its governance assessment are recorded in sufficient detail to reconstruct decisions after the fact. Audit is not just logging. It is the guarantee that every action can be examined, explained, and justified. If an action cannot be audited, it should not be executed.
Incident Detection monitors for patterns that indicate something is going wrong, not just with this action but across actions. A single anomalous transaction might not trigger concern. A thousand anomalous transactions in rapid succession constitute an incident. This dimension watches for emergent problems that individual action evaluations cannot see.
Isolation ensures that the action’s effects are contained within appropriate boundaries. Can this agent’s action affect other agents, other systems, or other workflows in unintended ways? Isolation prevents cascading failures in which one agent’s error propagates through interconnected systems before governance can respond.
Security evaluates whether the action introduces or is subject to threats. Adversarial inputs, injection attacks, data exfiltration, unauthorized access patterns. Security does not just protect the system from external threats. It protects the system from being weaponized against the people it serves.
Ethics assesses whether the action is justifiable beyond mere technical compliance. Is it fair? Could it cause harm even if it follows every rule? Ethics catches what rules miss. A system can be fully compliant and still cause harm. This dimension evaluates whether the action should happen, not just whether it is permitted.
Bias examines whether the action produces or reinforces discriminatory outcomes, including those arising from neutral rules applied to non-neutral contexts. The bank that denies loans based on credit history is applying a neutral rule. The bias dimension asks who gets denied and whether the pattern constitutes systematic exclusion.
Human Override ensures that a human can intervene at any point, that the mechanism for intervention actually works, and that the system responds to override commands without delay. This is not a philosophical commitment to human control. It is a mechanical capability that must be tested, maintained, and available at execution speed.
Cryptographic Verification confirms the integrity and authenticity of data, instructions, and identities involved in the action. Are inputs authenticated? Are instructions tamper-evident? Can the agent verify that the data it is acting on has not been modified? Cryptographic integrity prevents a category of failures that no other dimension can detect.
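Taken together, the dimensions suggest a simple data model: an enumeration of thirteen dimensions and one signal level per dimension. The sketch below is a hypothetical encoding using the signal vocabulary this article uses elsewhere (clear, medium, elevated, critical); it is not a published specification.

```python
from enum import Enum

class Dimension(Enum):
    RISK = "risk"
    AUTHORIZATION = "authorization"
    PRE_EXECUTION_CHECK = "pre_execution_check"
    VERIFIABLE_TRUST = "verifiable_trust"
    PRIVACY = "privacy"
    AUDIT = "audit"
    INCIDENT_DETECTION = "incident_detection"
    ISOLATION = "isolation"
    SECURITY = "security"
    ETHICS = "ethics"
    BIAS = "bias"
    HUMAN_OVERRIDE = "human_override"
    CRYPTOGRAPHIC_VERIFICATION = "cryptographic_verification"

class Signal(Enum):
    CLEAR = 0
    MEDIUM = 1    # worth noting, rarely blocking alone
    ELEVATED = 2  # heightened scrutiny
    CRITICAL = 3  # can block on its own

# A governance assessment assigns one Signal to every Dimension:
# all thirteen on every consequential action, never a subset.
Assessment = dict[Dimension, Signal]
```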
What Happens When They Fire Together
Consider a scenario. An insurance company deploys an AI agent for claims processing. A claim arrives.
- Risk evaluates: the claim involves a high-value payout. Elevated signal.
- Authorization checks: the agent has authority to process claims up to this value. Clear.
- Pre-execution verifies: all required documentation is present, the policy is active, and the claimant is verified. Clear.
- Verifiable trust assesses: this agent has processed 12,000 claims over four months with a 99.4 percent compliance rate. Trust is high for standard claims. This claim has unusual characteristics. Trust signal is moderate.
- Privacy evaluates: the claim involves medical records. Processing requires specific data handling protocols. Those protocols are in place. Clear, with logging required.
- Audit confirms: full recording capability is active. Clear.
- Incident detection checks: no anomalous patterns detected across recent claims. Clear.
- Isolation verifies: the processing outcome will not affect other pending claims or trigger automated actions in other systems. Clear.
- Security evaluates: no adversarial patterns detected in the submission. Clear.
- Ethics assesses: the claim involves a pre-existing condition exclusion. The denial would be technically compliant, but it would disproportionately affect a demographic group. Medium signal.
- Bias examines: denial patterns for this exclusion type show geographic concentration in lower-income zip codes. Medium signal.
- Human override confirms: override mechanism is available and responsive. Clear.
- Cryptographic verification: document authenticity confirmed, submission integrity intact. Clear.
No single dimension issues a critical flag. But the combination of elevated risk, moderate trust, and medium signals from ethics and bias produces a Unified Confidence Score below the automatic action threshold. The system pauses. It routes to human review with full context from all thirteen dimensions.
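As a sketch of how those signals could combine, here is the claim above reduced to numbers. The weights, interaction penalties, and threshold are invented for illustration; the article does not publish a formula for the Unified Confidence Score.

```python
# Signal levels for the claim above: 0 = clear, 1 = medium, 2 = elevated.
signals = {
    "risk": 2, "authorization": 0, "pre_execution": 0,
    "verifiable_trust": 1, "privacy": 0, "audit": 0,
    "incident_detection": 0, "isolation": 0, "security": 0,
    "ethics": 1, "bias": 1, "human_override": 0, "cryptographic": 0,
}

# Toy scoring: start from full confidence, deduct per signal, and
# deduct extra when related dimensions fire together.
score = 1.0 - 0.08 * sum(signals.values())
if signals["ethics"] and signals["bias"]:                  # discriminatory pattern
    score -= 0.10
if signals["risk"] >= 2 and signals["verifiable_trust"]:   # authority vs. evidence
    score -= 0.10

AUTO_ACTION_THRESHOLD = 0.75
verdict = "execute" if score >= AUTO_ACTION_THRESHOLD else "route to human review"
print(f"UCS = {score:.2f} -> {verdict}")  # UCS = 0.40 -> route to human review
```

The particular arithmetic does not matter. What matters is that the score falls below the threshold only because related signals co-occur; no single dimension would have pushed it there on its own.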
A system with only authorization and logging would have automatically processed this claim. The agent had permission. The action would have been recorded. And a pattern of discriminatory denials would have continued growing, invisible until someone filed a lawsuit.
The Dimensions Most Organizations Skip
Authorization and audit are table stakes. Most organizations implement both. Security gets attention because breaches make headlines. Everything else is considered optional until it isn’t.
The dimensions organizations skip most often are the ones that prevent the most consequential failures.
Incident detection is skipped because it requires monitoring across actions, not just within them. Individual action evaluation is simpler to build. But the loan notification disaster, the one that sent 2,000 duplicate approvals, was not a single-action failure. Each individual notification looked fine. The pattern was the problem. Without incident detection, the system had no way to recognize that something had gone catastrophically wrong at the aggregate level.
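A minimal version of cross-action monitoring is not exotic. The sketch below assumes a sliding window and a duplicate threshold, both invented for illustration, and flags the pattern that per-action evaluation misses:

```python
from collections import Counter, deque

class IncidentDetector:
    """Hypothetical cross-action monitor: each action looks fine alone;
    the aggregate pattern is what signals an incident."""

    def __init__(self, window: int = 1000, max_duplicates: int = 3):
        self.recent = deque(maxlen=window)  # keys of the last N actions
        self.max_duplicates = max_duplicates

    def observe(self, action_key: str) -> bool:
        """Record one action; return True if an incident is suspected."""
        self.recent.append(action_key)
        return Counter(self.recent)[action_key] > self.max_duplicates

detector = IncidentDetector()
for _ in range(5):  # the same approval notification, sent repeatedly
    incident = detector.observe("loan_approval:customer_4417")  # invented key
print(incident)  # True: a signal no per-action check could have produced
```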
Isolation is skipped because modern architectures encourage integration. Agents that share context and coordinate across systems are more capable than those in silos. But shared context means shared failure modes. When one agent’s error propagates through three other systems before anyone notices, the blast radius has expanded far beyond what any single-action evaluation could have prevented.
Bias is skipped because organizations believe their rules are neutral. And they often are. The rules are not the problem. The outcomes are. Neutral rules applied to non-neutral populations produce biased results. This dimension requires evaluating outcomes, not just inputs, and most governance architectures evaluate only inputs.
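Evaluating outcomes rather than inputs can be as simple as comparing denial rates across groups. The numbers and group labels below are invented to illustrate the shape of the check:

```python
def denial_rate(decisions, group):
    """Share of decisions in a group that were denials."""
    in_group = [d for d in decisions if d["group"] == group]
    denied = [d for d in in_group if d["denied"]]
    return len(denied) / len(in_group) if in_group else 0.0

# Invented outcomes: the rule that produced them was neutral on its face.
decisions = (
    [{"group": "low_income_zip", "denied": True}] * 72
    + [{"group": "low_income_zip", "denied": False}] * 28
    + [{"group": "other_zip", "denied": True}] * 31
    + [{"group": "other_zip", "denied": False}] * 69
)
gap = denial_rate(decisions, "low_income_zip") - denial_rate(decisions, "other_zip")
print(f"denial-rate gap: {gap:.0%}")  # denial-rate gap: 41%
```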
Verifiable trust is skipped because it requires patience. Building a trust profile based on observed behavior takes time. Organizations that deploy AI systems want those systems operating at full authority immediately. The result is agents with maximum permission and zero behavioral evidence, which is exactly the condition under which catastrophic failures occur.
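A scoped trust profile is likewise small in code, even if it is slow to fill with evidence. The class below is a hypothetical sketch: trust accrues per action type from observed outcomes and never transfers across scopes.

```python
from collections import defaultdict

class TrustProfile:
    """Hypothetical evidence-based trust, scoped by action type."""

    def __init__(self):
        self.history = defaultdict(lambda: [0, 0])  # scope -> [compliant, total]

    def record(self, scope: str, compliant: bool):
        self.history[scope][0] += int(compliant)
        self.history[scope][1] += 1

    def trust(self, scope: str, min_evidence: int = 100) -> float:
        compliant, total = self.history[scope]
        if total < min_evidence:
            return 0.0  # no behavioral evidence means no earned authority
        return compliant / total

profile = TrustProfile()
for _ in range(12_000):
    profile.record("routine_claim", compliant=True)
print(profile.trust("routine_claim"))     # 1.0: earned, in scope
print(profile.trust("high_value_claim"))  # 0.0: trust does not transfer
```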
Ethics is skipped because it is hard to operationalize. Security has clear metrics. Authorization has binary outcomes. Ethics requires judgment, context, and the willingness to halt an action that is technically permitted but substantively wrong. Most governance architectures lack a mechanism for this evaluation because the organizations that build them never designed one.
A Diagnostic, Not a Checklist
The temptation is to treat the thirteen dimensions as a compliance checklist. Implement all thirteen, check the boxes, declare governance complete.
This misses the point.
The value of the dimensions is not in their individual evaluations. It is in their interactions. Ethics plus bias reveals discriminatory patterns that neither catches alone. Security plus ethics catches adversarial exploitation of empathetic design. Risk plus verifiable trust calibrates authority to evidence rather than optimism. Incident detection plus isolation prevents failures from cascading.
The Unified Confidence Score exists precisely because governance is not thirteen independent assessments. It is a composite evaluation where the relationships between dimensions matter as much as the dimensions themselves. A medium flag from one dimension might mean nothing. Medium flags from three related dimensions might demand immediate human review.
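Those interactions can be stated as explicit rules. The pairs below come from the paragraph above; the encoding, and the decision to escalate on any matched pair, are assumptions for illustration:

```python
# Hypothetical interaction rules: signals that mean more together
# than either means alone. Pair names follow this article; the
# encoding and thresholds are illustrative assumptions.
INTERACTION_RULES = [
    ({"ethics", "bias"}, "possible discriminatory pattern"),
    ({"security", "ethics"}, "possible adversarial exploitation"),
    ({"risk", "verifiable_trust"}, "authority outrunning evidence"),
    ({"incident_detection", "isolation"}, "possible cascading failure"),
]

def escalations(signals: dict[str, int]) -> list[str]:
    """Reasons to demand human review, from which dimensions fired together."""
    fired = {dim for dim, level in signals.items() if level > 0}
    return [reason for pair, reason in INTERACTION_RULES if pair <= fired]

# Two medium flags, neither critical alone, escalate together.
print(escalations({"ethics": 1, "bias": 1, "security": 0}))
# ['possible discriminatory pattern']
```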
This is why sequential evaluation fails. If security evaluates first and passes, the action moves forward. Ethics never sees the security context. Bias never sees the ethics context. Each dimension operates in isolation, producing independent verdicts that never interact. The relay race model produces thirteen separate assessments. The diagnostic model produces a single integrated assessment informed by thirteen perspectives.
Starting From Where You Are
No organization goes from two dimensions to thirteen overnight. The path forward starts with an honest inventory.
Which dimensions do you currently evaluate? For most organizations, the answer is authorization (in some form) and audit (in some form, often just logging). That is a starting point, not a destination.
Which dimensions would have prevented your last incident? Work backward from failures. If the answer is “incident detection would have caught the pattern before it escalated,” that is your next priority. If the answer is “bias evaluation would have flagged the outcome before it became systemic,” start there.
Which dimensions interact most in your domain? In healthcare, ethics and privacy are tightly coupled. In financial services, bias and authorization create the most consequential intersections. In customer-facing applications, security and ethics intersect where adversarial manipulation meets empathetic design. Prioritize the dimension pairs that matter most for your risk profile.
The goal is not to check thirteen boxes. The goal is to close the gaps where your most consequential failures are likely to occur. Every dimension you add is a perspective that participates in every governance decision, simultaneously catching interactions that the dimensions you already have cannot see on their own.
The Architecture That Matters
Most organizations build the architecture that launches AI systems. Few build the architecture that governs them.
The thirteen dimensions are what governance looks like when it is built to match the complexity of the systems it governs. Not a bouncer and a security camera. A comprehensive diagnostic framework that evaluates every consequential action from every relevant perspective, simultaneously, and produces a unified assessment that reflects the full picture.
Two dimensions are a start. Thirteen is governance.
The question is not whether you can afford to build all thirteen. It is whether you can afford not to, given what lives in the gaps between the ones you have.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.