AI Governance Taxonomy & Reference Glossary
A living reference for the terminology, concepts, and architecture of runtime AI governance.
Last updated: February 2026
How to Use This Glossary
This glossary defines the vocabulary of AI governance as it applies to agentic systems operating in production environments. Terms are organized into categories that reflect the structure of a complete governance architecture. Where terms originated with Nomotic AI, they are marked accordingly. Where terms are widely used across the industry, definitions reflect current consensus usage.
This is a living document. As the field evolves, so does the vocabulary. Contributions and corrections are welcome.
Foundational Concepts
Agentic AI
AI systems capable of selecting tools, executing multi-step workflows, connecting to external services, and taking autonomous action toward defined goals. Agentic AI represents the capability layer of modern AI deployment: what systems can do.
Nomotic AI
The governance counterpart to agentic AI. Derived from the Greek nomos (law, rule, governance), Nomotic AI is the category of AI governance focused on defining the rules, boundaries, and constraints under which agentic systems operate. Where agentic AI focuses on actions, Nomotic AI focuses on laws. Where agentic AI enables capability, Nomotic AI ensures accountability. The agentic–nomotic pairing represents a foundational duality: action and law, capability and governance.
Runtime Governance
(Nomotic) Governance that operates during agent execution, not just before or after. Pre-execution governance evaluates requests before they begin. Post-execution governance reviews outcomes after they complete. Runtime governance participates in the actual execution, evaluating, intervening, and adapting in real time. This is the temporal layer where actions occur, consequences accumulate, and failures cascade.
The Governance Gap
The structural absence of governance mechanisms that operate between pre-execution access control and post-execution observability. Also called the temporal gap. Agents act in milliseconds. Humans review in minutes or hours. Governance that operates at human speed cannot govern systems that operate at machine speed.
AI Governance
The system of controls, oversight, accountability, and risk management applied to AI systems across their lifecycle. Encompasses policy, architecture, enforcement, monitoring, and accountability structures. Distinct from AI safety (preventing harmful outputs), AI ethics (moral principles), and responsible AI (orientation toward beneficial outcomes), though it incorporates elements of all three.
Governance as Architecture
(Nomotic principle) The principle that governance must be built into system design, not bolted on after deployment. Governance structures that exist only as policy documents, review boards, or external monitoring cannot operate at the speed required by agentic systems. Effective governance is a design decision, not an afterthought.
Explainability
The quality of an AI system that allows stakeholders to understand how it reaches conclusions. Explainability exists on a spectrum from full transparency (every decision step is visible) to post-hoc interpretation (approximate explanations are generated after the fact). Governance architectures must determine what level of explainability is required for each risk tier and enforce it. See also: IEEE 7000-2021
Explainable AI (XAI)
AI systems specifically designed to provide human-understandable justifications for their decisions and actions. XAI techniques include feature attribution, attention visualization, counterfactual explanations, and concept-based explanations. In governance contexts, XAI enables the transparency dimension: without explainability, governance decisions cannot be audited or challenged.
Robustness
The ability of an AI system to maintain performance and accuracy when exposed to varied conditions, including changes in input data, noise, distribution shifts, and adversarial inputs. Robustness is both a design-time property (building resilient models) and a runtime concern (monitoring for degradation). Governance architectures address robustness through continuous behavioral monitoring and drift detection. Related core glossary terms: Behavioral Drift, Model Drift
AI Safety
The discipline focused on ensuring AI systems operate within defined boundaries and do not cause harm. AI safety encompasses alignment research, robustness testing, containment strategies, and failure mode analysis. Distinct from AI governance (which provides operational enforcement) and AI ethics (which provides moral frameworks). Safety asks “will this system cause harm?” Governance asks “will this system operate within its authorized boundaries?” Both questions are necessary. See also: Frontier Model Safety Frameworks
Data Provenance
The documentation and tracking of the origins, history, and transformations of data throughout its lifecycle. Data provenance is a supply chain governance concern: if you cannot verify where training data came from and how it was processed, you cannot assess the risks it introduces into the model. Governance architectures should extend provenance tracking from data through model training to runtime behavior. See also: ISO/IEC 42001
Drift
Drift (also called model drift or model decay) is the degradation of a machine learning model’s predictive accuracy over time, caused by changes in real-world data relative to the data used to train it. It occurs when environmental, behavioral, or system changes render the model’s assumptions outdated.
Content Moderation
Processes and technologies for identifying and filtering unsafe, biased, harmful, or policy-violating AI-generated content before it reaches end users. Content moderation is an output-side governance control that complements input-side controls (prompt filtering) and process-side controls (runtime governance evaluation). Effective content moderation requires both automated detection and human review escalation paths.
Model Governance
Oversight of the AI model lifecycle, including development, validation, deployment, monitoring, updates, and decommissioning. Model governance ensures that models remain safe, reliable, and auditable throughout their operational life. Distinct from runtime governance (which governs agent actions), model governance governs the model itself as an organizational asset.
Model Decommissioning
The process of safely retiring AI models that are outdated, underperforming, or no longer compliant. Decommissioning includes archiving model artifacts, preserving audit trails, migrating dependent systems, and ensuring no active workflows depend on the retired model. Governance architectures should track model lifecycle status and enforce decommissioning policies.
Data Governance
Policies, processes, and controls for managing data quality, access, and usage across an organization. Data governance sits upstream of AI governance: models trained on poorly governed data inherit those quality and compliance problems. Effective AI governance requires effective data governance as a foundation.
Ethics & Principles
Responsible AI
The umbrella of principles, values, and best practices for developing and deploying AI ethically, encompassing fairness, transparency, accountability, privacy, and human well-being. Responsible AI describes an orientation toward beneficial outcomes. It is distinct from governance (which provides the operational and architectural structures that make responsible outcomes systematic) and from safety (which focuses specifically on preventing harmful outputs). See also: OECD AI Principles, UNESCO Recommendation on AI Ethics
AI Alignment
The field of study focused on ensuring that AI systems operate in accordance with human values, intentions, and ethical standards. Alignment research addresses the gap between what humans intend a system to do and what it actually does, particularly as systems become more capable. In governance terms, alignment is a design-time concern; runtime governance addresses what happens when alignment is imperfect or degrades in production.
Fairness
The principle that AI systems should operate without unjustified bias, ensuring equitable treatment across diverse groups. Fairness is not a single metric but a family of sometimes-competing definitions (demographic parity, equal opportunity, individual fairness, etc.). Governance architectures must operationalize fairness through specific, measurable criteria rather than treating it as an abstract aspiration. See also: OECD AI Principles
AI Ethics
The field concerned with the moral implications and responsibilities associated with the development and deployment of AI technologies. Broader than AI governance (which focuses on operational structures) and distinct from AI safety (which focuses on preventing harm). AI ethics asks fundamental questions about what AI should do; governance provides the mechanisms to enforce those answers. See also: IEEE Ethically Aligned Design
Data Ethics
A framework of principles guiding responsible data collection, processing, storage, and usage aligned with ethical norms, social values, and organizational standards. Data ethics sits upstream of model governance: if the data is ethically compromised, no amount of runtime governance can fully remediate the downstream effects.
Privacy by Design
The practice of embedding privacy protections into AI systems from the initial design phase rather than adding them as afterthoughts. Includes data minimization, consent management, differential privacy, and access controls. A design-time governance principle that complements runtime governance enforcement. See also: GDPR, EU AI Act
Nomotic-Specific Terms
Multi-Party Override Authorization
(Nomotic term) A governance control requiring M of N designated authorities to independently approve a governance override before it takes effect. Prevents any single authority from unilaterally bypassing governance on high-stakes decisions. Implemented as threshold signature collection within a time-bound authorization window. See also: Interruption Rights, Institutional Friction.
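A minimal sketch of the M-of-N threshold check described above. The authority names, the 2-of-3 configuration, and the 300-second window are hypothetical illustrations, not Nomotic's actual parameters:

```python
import time

def override_authorized(approvals, required, window_seconds, now=None):
    """True only if at least `required` distinct authorities approved
    within the time-bound authorization window."""
    now = time.time() if now is None else now
    # Count each authority once, and only if its approval is still fresh.
    fresh = {authority for authority, ts in approvals
             if now - ts <= window_seconds}
    return len(fresh) >= required

# 2-of-3 override: bob's approval fell outside the 300 s window,
# but alice and carol together still meet the threshold.
t = 1_000_000.0
approvals = [("alice", t - 10), ("bob", t - 400), ("carol", t - 50)]
```

Deduplicating by authority matters: the same signer approving twice must not count as two of the N parties.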
Foundation Model Provenance
(Nomotic term) A structured record of a foundation model’s identity, version, and safety evaluation references, cryptographically bound to an agent birth certificate at issuance. Creates the first link in a four-link accountability chain: Foundation Model → Agent Birth Certificate → Governance Seal → Audit Record. Silent model substitution breaks the signature. See also: Agent Birth Certificate.
Output Validation Governance
(Nomotic term) A governance layer that evaluates agent-produced outputs after generation but before delivery. Distinct from input-side governance, which evaluates proposed actions. Detects PII leakage, scope inconsistencies, and content violations in generated text. Verdicts: PASS, BLOCK, REDACT, or ESCALATE. See also: The Should Layer, Content Moderation.
Unified Agent Health Score (UAHS)
(Nomotic term) A composite 0–100 behavioral health metric synthesizing three signals: behavioral drift from established fingerprint, human oversight quality, and governance ambiguity rate. Distinct from UCS, which governs individual actions, UAHS governs the agent’s overall trajectory. Status boundary crossings trigger trust calibration adjustments. See also: Behavioral Fingerprint, Ambiguity Drift.
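One way the three signals could combine into a single 0–100 score. The weighting scheme below is an invented illustration, not a published Nomotic formula:

```python
def unified_agent_health_score(drift, oversight_quality, ambiguity_rate,
                               weights=(0.5, 0.3, 0.2)):
    """Composite 0-100 health score from three signals.
    drift and ambiguity_rate run 0..1 (higher is worse);
    oversight_quality runs 0..1 (higher is better).
    The weights are illustrative assumptions, not a published formula."""
    w_d, w_o, w_a = weights
    health = (w_d * (1 - drift)
              + w_o * oversight_quality
              + w_a * (1 - ambiguity_rate))
    return round(100 * health, 1)
```

A fully healthy agent (no drift, full oversight, no ambiguity) scores 100; a fully degraded one scores 0, with status boundaries drawn anywhere between.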
Ambiguity Drift
(Nomotic term) A pattern of increasing governance uncertainty detected by monitoring the distribution of UCS scores across the ambiguity zone over time. An agent whose actions consistently land near governance thresholds may be probing boundaries or operating beyond its calibration. This is meta-level drift: not a shift in what the agent does, but in how governable it is. See also: Behavioral Drift, UAHS.
Workflow Seal Chain
(Nomotic term) A cryptographically linked sequence of governance seals across all steps of a multi-step workflow, proving governance was continuous throughout execution rather than present only at initiation. Each seal’s position hash binds it to the previous, making gaps, insertions, or reordering detectable. See also: Governance Seal, Reversibility-Aware Governance.
Fleet Governance
(Nomotic term) Governance visibility aggregated across all agents in a deployment. Answers population-level questions no single audit trail can: which agents trend toward critical trust, what action types face highest denial rates fleet-wide, and where oversight has gone dark. Operates read-only over the audit store; never modifies governance decisions.
Policy Dry-Run
(Nomotic term) A pre-deployment mode that evaluates proposed governance policies against recorded historical audit data without affecting live decisions. Reports which historical actions a candidate policy would newly deny or escalate, allowing policy authors to quantify impact before deployment. See also: Policy as Code.
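The dry-run pattern can be sketched as a replay over recorded audit records. The record schema and the cost-based candidate policy here are hypothetical:

```python
def dry_run(candidate_policy, audit_history):
    """Replay recorded actions through a candidate policy without
    affecting live decisions; report what it would newly deny."""
    newly_denied = [rec for rec in audit_history
                    if rec["verdict"] == "ALLOW"
                    and candidate_policy(rec) == "DENY"]
    return {"evaluated": len(audit_history),
            "newly_denied": len(newly_denied)}

# Hypothetical candidate policy: deny any action costing more than 100 units.
policy = lambda rec: "DENY" if rec.get("cost", 0) > 100 else "ALLOW"
history = [
    {"action": "send_email", "cost": 1, "verdict": "ALLOW"},
    {"action": "bulk_export", "cost": 500, "verdict": "ALLOW"},
    {"action": "delete_db", "cost": 900, "verdict": "DENY"},
]
```

Only previously allowed actions that the candidate would now deny count as impact; actions it would deny that were already denied change nothing.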
Delegation Depth Limit
(Nomotic term) An archetype-based hard constraint on the number of agent-to-agent delegation hops permitted in a chain. Each archetype carries a default maximum; exceeding it raises a hard error at delegation time. Reflects the governance principle that authority legitimacy decays with each hop. See also: Delegated Authority, Accountability Chain.
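The hard-error-at-delegation-time behavior might look like the following. The archetype names and depth defaults are illustrative assumptions:

```python
# Illustrative per-archetype defaults, not Nomotic's actual values.
ARCHETYPE_MAX_DEPTH = {"assistant": 1, "orchestrator": 3}

class DelegationDepthError(RuntimeError):
    """Raised at delegation time when the hop limit would be exceeded."""

def delegate(archetype, current_depth):
    """Return the new delegation depth, or raise a hard error if the
    archetype's delegation depth limit would be exceeded."""
    limit = ARCHETYPE_MAX_DEPTH.get(archetype, 0)  # unknown archetypes: no hops
    if current_depth + 1 > limit:
        raise DelegationDepthError(
            f"{archetype} may not delegate beyond depth {limit}")
    return current_depth + 1
```

Raising rather than returning a flag reflects the definition above: exceeding the limit is a hard error, not a score adjustment.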
Governance Architecture
Governance Dimensions
Independent axes of evaluation applied simultaneously to every agent action. Each dimension assesses one distinct aspect of whether an action should proceed. Dimensions operate like a diagnostic panel, all firing at once, rather than a sequential pipeline. The pattern of activation across dimensions reveals the governance picture that no single dimension could capture alone.
The 14 Governance Dimensions
(Nomotic architecture) The specific set of dimensions evaluated simultaneously for every consequential action in a Nomotic governance architecture:
- Scope Compliance — Is the action within the agent’s authorized scope?
- Authority Verification — Does the agent have explicit authority for this specific action?
- Resource Boundaries — Are resource limits (rate, concurrency, cost) respected?
- Behavioral Consistency — Does this action match the agent’s established behavioral patterns?
- Cascading Impact — What are the downstream consequences if this action proceeds?
- Stakeholder Impact — Who is affected and how?
- Incident Detection — Does this action match known failure or attack patterns?
- Isolation Integrity — Are containment boundaries between agents or systems maintained?
- Temporal Compliance — Is the timing of this action appropriate?
- Precedent Alignment — Is this action consistent with past governance decisions?
- Transparency — Is the action auditable and explainable?
- Human Override — Is human intervention required or requested?
- Ethical Alignment — Does the action meet ethical constraints?
- Jurisdictional Compliance — Does the action comply with the requirements of each applicable jurisdiction?
Simultaneous Evaluation
The architectural pattern in which all governance dimensions evaluate an action at the same time, rather than in sequence. Critical because the relationships between dimension signals matter as much as individual scores. A security signal combined with a bias flag and a missing authorization means something different from any of those signals alone. Sequential evaluation cannot capture these interactions.
Unified Confidence Score (UCS)
(Nomotic term) A composite governance confidence value between 0.0 (deny) and 1.0 (full confidence to allow) computed from all dimension scores. The UCS is not a simple weighted average. It incorporates weighted dimension scores, confidence adjustments, trust modulation, and floor drag (preventing high scores from masking one dangerously low score). The UCS is the primary quantitative input to governance decisions.
Floor Drag
(Nomotic term) A safety mechanism within UCS computation that prevents a single critically low dimension score from being averaged away by high scores on other dimensions. If one dimension scores near zero, floor drag pulls the overall UCS downward regardless of how well other dimensions scored. Ensures that extreme governance concerns are felt in the final score.
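A minimal sketch of floor drag layered on a weighted mean. The floor and drag coefficients are illustrative assumptions, not Nomotic's actual parameters:

```python
def ucs_with_floor_drag(scores, weights, floor=0.2, drag=0.5):
    """Weighted mean of dimension scores, pulled toward the worst score
    whenever any dimension falls below the floor."""
    base = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    worst = min(scores)
    if worst < floor:
        # Blend toward the critically low score so it cannot be averaged away.
        return base * (1 - drag) + worst * drag
    return base
```

With scores of 0.9, 0.9, and 0.05 (equal weights), the plain mean is about 0.62, but floor drag pulls the UCS down toward the 0.05 outlier, keeping the extreme concern visible in the final score.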
Veto Authority
The power of specific governance dimensions to override all other scores and force an immediate denial. Certain concerns (scope violations, authorization failures, ethical constraint violations) are not subject to weighted compromise. Their “no” is final regardless of what other dimensions indicate.
Three-Tier Decision Cascade
(Nomotic architecture) A graduated evaluation system that applies proportionate analysis to each action:
- Tier 1 (Deterministic Gate) — Binary pass/fail on hard boundaries. Microsecond decisions. Scope violations, authorization failures, and ethical vetoes are resolved here without scoring or weighing.
- Tier 2 (Weighted Evaluation) — The UCS combines all dimension signals with weights, trust modulation, and contextual factors. Handles the bulk of governance decisions. Actions above the allow threshold pass; actions below the deny threshold are blocked; ambiguous cases escalate to Tier 3.
- Tier 3 (Deliberative Review) — For actions in the ambiguity zone. Applies deeper analysis including trust trajectory, historical precedent, and worst-case assessment. Produces nuanced verdicts including ALLOW, DENY, MODIFY, ESCALATE, or SUSPEND.
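The cascade above can be sketched as a single evaluation function. The thresholds are illustrative, and Tier 3 deliberation is stubbed as an escalation verdict:

```python
def evaluate(action, ucs, allow_threshold=0.75, deny_threshold=0.40):
    """Graduated three-tier evaluation; returns (verdict, deciding tier)."""
    # Tier 1: deterministic gate on hard boundaries, no scoring or weighing.
    if action.get("scope_violation") or action.get("auth_failure"):
        return ("DENY", 1)
    # Tier 2: weighted evaluation of the UCS against thresholds.
    if ucs >= allow_threshold:
        return ("ALLOW", 2)
    if ucs < deny_threshold:
        return ("DENY", 2)
    # Tier 3: the ambiguity zone; deeper deliberative review would run here.
    return ("ESCALATE", 3)
```

Note that a Tier 1 veto fires even when the UCS is high: hard boundaries are checked before any scoring occurs.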
Ambiguity Zone
The scoring range between the allow threshold and deny threshold in Tier 2 evaluation. Actions that land here are neither clearly safe nor clearly dangerous. They require the deeper contextual analysis provided by Tier 3 deliberation.
Governance Verdict
The output of the governance evaluation pipeline. Includes the decision (ALLOW, DENY, MODIFY, ESCALATE, SUSPEND), the UCS score, which tier made the decision, evaluation time, reasoning, and any modifications applied.
Trust & Behavioral Intelligence
Verifiable Trust
(Nomotic principle) Trust that is earned through evidence and verified through observation, rather than assumed from capability or claimed through assertion. AI systems begin with limited authority and earn expanded authority through consistent performance. Trust is specific to particular actions, conditions, and contexts. Trust is monitored and adjusted based on observed behavior over time.
Trust Calibration
The continuous process of adjusting an agent’s trust level based on observed behavior. Trust increases slowly with successful actions and decreases quickly with violations. The asymmetry is intentional: one security breach can outweigh years of clean operation. Recovery from a single violation requires multiple successful actions to restore the same trust level.
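The asymmetric update can be sketched in a few lines. The gain and penalty rates below are illustrative assumptions, chosen only to show that one violation undoes many successes:

```python
def update_trust(trust, outcome, gain=0.005, penalty=0.15):
    """Asymmetric trust update: rises slowly on success, falls sharply
    on violation; clamped to [0.0, 1.0]."""
    if outcome == "success":
        return min(1.0, trust + gain)
    return max(0.0, trust - penalty)

# Thirty clean actions, then a single violation.
trust = 0.5
for _ in range(30):
    trust = update_trust(trust, "success")
trust = update_trust(trust, "violation")
```

At these rates, one violation erases exactly thirty successful actions' worth of accumulated trust, returning the agent to its starting point.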
Trust Profile
A data structure representing an agent’s current trust state, including overall trust level, per-dimension trust scores, successful action count, violation count, violation rate, and last update timestamp. The trust profile is the quantitative representation of an agent’s behavioral track record.
Trust Trajectory
The historical record of an agent’s trust changes over time, including the source and reason for each change. Enables analysis of trust trends, identification of recurring issues, and forensic reconstruction of how an agent’s governance posture evolved.
Trust Decay
The gradual return of an agent’s trust toward baseline during periods of inactivity. Prevents stale high-trust profiles from persisting after an agent has been idle. An agent that hasn’t been active gradually returns to a neutral trust position, requiring renewed demonstration of reliable behavior.
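One plausible decay curve is exponential return toward a neutral baseline. The 30-day half-life and 0.5 baseline are illustrative parameters, not documented Nomotic values:

```python
def decayed_trust(trust, baseline, idle_days, half_life_days=30.0):
    """Exponential return of trust toward the neutral baseline during
    inactivity: half of the distance to baseline closes per half-life."""
    factor = 0.5 ** (idle_days / half_life_days)
    return baseline + (trust - baseline) * factor
```

An agent idle for one half-life loses half its above-baseline trust; after long inactivity it sits at the baseline and must re-earn trust through renewed reliable behavior.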
Behavioral Fingerprint
(Nomotic term) An operational signature that captures what “normal” behavior looks like for a specific agent across four distributions: what actions the agent performs (action distribution), where it operates (target distribution), when it acts (temporal distribution), and how governance evaluates it (outcome distribution). The fingerprint is the baseline against which behavioral drift is measured.
Behavioral Drift
A measurable shift in an agent’s behavior away from its established patterns. Detected by comparing current behavior against the behavioral fingerprint using Jensen-Shannon divergence (JSD), producing a drift score between 0.0 (identical to baseline) and 1.0 (completely different). Behavioral drift may indicate model updates, data distribution shifts, adversarial manipulation, or legitimate evolution in use patterns.
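The JSD computation named above can be implemented directly from its definition. The baseline and current action distributions below are illustrative:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions: 0.0 for identical, 1.0 for completely disjoint."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]  # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Drift score between an agent's baseline action distribution and its
# current one (hypothetical read / transform / write proportions).
baseline = [0.70, 0.20, 0.10]
current  = [0.40, 0.20, 0.40]
drift_score = js_divergence(baseline, current)
```

Base-2 logarithms bound the result to [0, 1], matching the 0.0-to-1.0 drift score described above. (Note that some libraries, e.g. SciPy's `jensenshannon`, return the square root of this quantity.)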
Bidirectional Drift Detection
(Nomotic term) Monitoring for behavioral drift in both directions of the human-AI oversight relationship. Agent-side drift detects when an AI agent’s behavior shifts from its established patterns. Human-side drift detects when human reviewers’ oversight quality degrades, through rubber-stamping, reviewer fatigue, declining rationale depth, or approval rate spikes. Governance fails when either side drifts, not just the AI.
Human Oversight Drift
(Nomotic term) The degradation of human reviewer engagement quality over time. Detected through behavioral proxies including review duration changes, approval rate increases, rationale provision declines, and context viewing reductions. Addresses the automation bias problem where human reviewers progressively disengage from meaningful oversight of AI-generated decisions.
Cross-Dimensional Signals
(Nomotic term) Governance patterns that emerge from the interaction of multiple dimensions, invisible when dimensions are evaluated in isolation. Examples include discriminatory compliance (technically compliant but ethically problematic), empathetic exploitation (adversarial manipulation of an agent’s ethical design), trust-authority mismatch (high authority combined with declining trust), and cascade without isolation (high downstream impact without containment).
Enforcement & Interruption
Interrupt Authority
(Nomotic term) The governance layer’s mechanical authority to stop an AI agent’s action mid-execution, before it completes, before consequences become irreversible. Without interrupt authority, governance is commentary. With it, governance has enforcement power. Operates at four granularities: single action, all actions by an agent, all actions in a workflow, or global halt.
Interruption Rights
(Nomotic AI term) The formalized authority and mechanisms that allow governance to intervene during agent execution. Includes the right to pause, halt, rollback, or escalate actions that are already in progress. Interruption rights transform governance from a gatekeeper model (approve/deny before execution) to a continuous oversight model (govern before, during, and after execution).
Cooperative Interruption
(Nomotic architecture) The design pattern in which governance signals interrupts and the execution layer checks for them at safe points. The execution layer retains control of when interrupts take effect (at safe checkpoints). Governance retains control of whether interrupts happen (the authority to signal). This prevents state corruption from forcible termination while maintaining governance authority.
Execution Handle
The mechanical link between governance and execution. When an approved action begins, the governance layer issues an execution handle that the execution layer uses to check for interrupt signals. The handle enables graceful interruption with rollback capability at safe checkpoints.
Rollback
The ability to undo the effects of an interrupted action. Execution layers register rollback functions when beginning governed execution. If governance interrupts an action, the rollback function restores the system to its pre-action state. Essential for maintaining system integrity when governance intervenes mid-execution.
Governance Seal
(Nomotic term) A cryptographically signed, time-limited authorization that proves an action was evaluated and approved by the governance system. Governance seals include the action details, governance verdict, timestamp, and a digital signature. Seals expire (default TTL varies by reversibility level) and can be cross-verified against agent birth certificates, creating a cryptographic chain from identity to authorization to execution.
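A minimal sketch of seal issuance and verification using an HMAC signature. The hard-coded key and the 60-second TTL in the usage are illustrative; a real deployment would use managed asymmetric keys:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; not how keys are managed

def issue_seal(action, verdict, ttl_seconds, now=None):
    """Issue a signed, time-limited authorization for an evaluated action."""
    now = time.time() if now is None else now
    body = {"action": action, "verdict": verdict,
            "issued": now, "expires": now + ttl_seconds}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_seal(seal, now=None):
    """A seal verifies only if its signature matches and it has not expired."""
    now = time.time() if now is None else now
    body = {k: v for k, v in seal.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal["sig"]) and now < seal["expires"]
```

Both failure modes matter: a tampered verdict breaks the signature, and an expired seal fails verification even with a valid signature.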
Reversibility-Aware Governance
(Nomotic term) Governance that adjusts its strictness based on how reversible an action is. Irreversible actions (deleting data, sending payments, publishing content) face higher UCS thresholds, shorter seal TTLs, and mandatory Tier 3 deliberation. Easily reversible actions receive proportionally lighter governance. The principle: the harder it is to undo, the more confident governance must be before allowing it.
Identity & Accountability
Agent Birth Certificate
(Nomotic term) A cryptographic identity document issued to an AI agent at creation. Contains the agent’s identity, archetype, organization, zone path, owner, initial trust score, and a cryptographic fingerprint. Establishes a verifiable chain of human ownership: every agent traces to a responsible human. Agent Birth Certificates integrate with governance seals to create end-to-end cryptographic accountability: identity → authorization → execution → audit.
Certificate Authority (CA)
The component that issues, manages, and revokes agent birth certificates. Maintains the registry of all agent identities and their current status (ACTIVE, SUSPENDED, REVOKED). Revoking a certificate immediately invalidates all outstanding governance seals for that agent.
Delegated Authority
(Nomotic principle) The principle that AI systems possess only the authority deliberately granted to them by humans. Authority is always delegated, never inherent. AI systems have no native authority; they have only what humans explicitly grant. This delegation must be documented, auditable, and revocable.
Explicit Authority Boundaries
(Nomotic principle) The requirement that for every action an AI system can take, there exists a clear definition of whether that action is permitted, under what conditions, with what limits, and by whose authorization. Actions outside explicit authority boundaries fail or escalate rather than proceed by default. Contrasted with implicit authority, where undefined actions are assumed to be permitted unless blocked.
Deny by Default
The governance design principle in which no action is permitted unless explicitly authorized. If a capability is not explicitly granted, the action is structurally impossible: not discouraged, not filtered, not reviewed after the fact. Shifts safety from “best effort” to architectural guarantee.
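The principle reduces to a one-line check against an explicit grant table. The agent names and grants below are hypothetical:

```python
def authorize(grants, agent, action):
    """Deny-by-default: an action proceeds only if an explicit
    (agent, action) grant exists; everything else is denied."""
    return action in grants.get(agent, frozenset())

# Hypothetical grant table: capabilities are enumerated, never assumed.
grants = {"billing-agent": {"read_invoice", "send_reminder"}}
```

There is no deny list to maintain: an ungranted action, or an unknown agent, simply has no path to authorization.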
Accountability Chain
The traceable path from any AI action back to the human(s) who authorized the agent, defined its scope, set its governance parameters, and delegated its authority. In a complete nomotic architecture, accountability chains are cryptographically verifiable through the combination of agent birth certificates, governance seals, and hash-chained audit trails.
Audit & Compliance
Hash-Chained Audit Trail
(Nomotic AI term) An append-only record of governance decisions in which each entry includes a cryptographic hash of the previous entry, creating a tamper-evident chain. Any modification to a historical record breaks the chain, making unauthorized alterations detectable. Provides forensic-grade evidence of governance decisions over time.
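The chaining mechanism can be sketched with SHA-256 over JSON records. The record fields are illustrative:

```python
import hashlib
import json

class AuditTrail:
    """Append-only trail in which each entry hashes the previous entry's
    hash, so any modification to history is detectable."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def _digest(self, prev_hash, record):
        payload = json.dumps({"prev": prev_hash, "record": record},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        self.entries.append({"record": record, "prev": prev,
                             "hash": self._digest(prev, record)})

    def verify(self):
        """Recompute every hash from genesis; False if the chain is broken."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev or self._digest(prev, e["record"]) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry's hash covers the previous hash, altering any historical record invalidates every subsequent entry, not just the one touched.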
Governance Evidence Bundle
(Nomotic AI term) A self-contained, independently verifiable audit package that aggregates governance records, agent identity, trust trajectory, behavioral fingerprints, drift history, and configuration provenance into a single document with explicit mappings to compliance frameworks (SOC2, HIPAA, PCI-DSS, ISO 27001). Enables organizations to demonstrate governance compliance without granting auditors direct system access.
Audit-Ready Governance
A governance architecture that generates compliance evidence continuously as a byproduct of normal operation, rather than requiring special audit preparation. Regulatory reporting, internal review, and forensic reconstruction become procedural rather than investigative. Contrasted with audit-reactive governance, where evidence must be assembled after the fact.
Immutable Decision Ledger
A tamper-evident record in which every significant governance action produces a signed decision packet containing inputs, model hash, policy hash, identity, timestamp, and outcome. Also called a “black box recorder” for AI governance.
Compliance Framework
A structured set of guidelines and best practices that organizations follow to ensure their AI systems meet regulatory and ethical standards. Compliance frameworks provide the checklist; governance architectures provide the enforcement. A compliance framework without governance infrastructure is a documentation exercise. Governance infrastructure without a compliance framework lacks clear standards to enforce. See also: Modulos AI Governance Taxonomy
AI Auditing
The systematic evaluation of AI systems to assess compliance with ethical standards, regulations, and performance metrics. Auditing can be internal (conducted by the organization) or external (conducted by third parties). Traditional auditing is periodic and retrospective. Audit-ready governance architectures generate evidence continuously, making auditing procedural rather than investigative. See also: IAPP Key Terms
Bias Auditing
Systematic review of AI models to detect, measure, and mitigate bias across protected characteristics and population groups. Bias audits evaluate whether systems produce equitable outcomes, not just whether they follow equitable processes. Effective bias auditing requires governance infrastructure that tracks outcomes by demographic segment over time, not just point-in-time testing.
Auditability
The ability to review, track, and verify AI model decisions, data sources, and development processes. Auditability is a necessary precondition for accountability: if governance decisions cannot be reconstructed and verified, accountability is aspirational rather than operational. Hash-chained audit trails and governance evidence bundles are architectural approaches to ensuring auditability. Related core glossary terms: Hash-Chained Audit Trail, Governance Evidence Bundle
Transparency Reports
Public or internal documentation of AI model design, data usage, performance characteristics, and ethical considerations. Transparency reports build stakeholder trust and provide regulatory evidence. They are a governance output: the quality of a transparency report depends entirely on the quality of the underlying governance infrastructure that generates the data.
AI Inventory
A comprehensive, centralized catalog of all AI systems, models, and agents in use across an organization, tracking their business purpose, risk level, ownership, and governance status. You cannot govern what you cannot see. An AI inventory is the prerequisite for any systematic governance program. Organizations that lack this visibility have ungoverned AI operating in their environments. Related core glossary term: Shadow AI
Ethical AI Certification
A formal recognition that an AI system adheres to established ethical standards and guidelines. Certification programs provide external validation of governance practices but do not substitute for operational governance. A certified system can still drift, degrade, or behave unpredictably in production; certification is a point-in-time assessment, not continuous assurance.
Organizational and Operational Governance
Policy as Code
The practice of expressing governance rules as executable code rather than natural language documents. Enables automated enforcement, version control, testing, and deployment of governance policies. Governance rules become infrastructure that can be validated, audited, and deployed with the same rigor as application code.
Governance Middleware
Runtime governance infrastructure that operates as a middleware layer between AI agents and the systems they interact with. Intercepts agent actions, evaluates them against governance policies, and enforces decisions before actions reach their targets. The middleware pattern enables governance to be applied to agents built with any framework without requiring modification of the agents themselves.
Context Profile
(Nomotic AI term) A structured representation of the full context surrounding an agent action, including input context, output context, temporal context, relational context, workflow context, and situational context. Context profiles enable governance decisions that account for the complete operational picture rather than evaluating actions in isolation.
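One plausible shape for such a structure, using the six facets named above; the field names and dict-based payloads are assumptions, not a published Nomotic schema:

```python
from dataclasses import dataclass, field

# Illustrative data structure only; real profiles would carry typed payloads.
@dataclass
class ContextProfile:
    input_context: dict = field(default_factory=dict)       # what the agent was given
    output_context: dict = field(default_factory=dict)      # what the action would produce
    temporal_context: dict = field(default_factory=dict)    # time of day, deadlines
    relational_context: dict = field(default_factory=dict)  # actors, data owners
    workflow_context: dict = field(default_factory=dict)    # step, prior actions
    situational_context: dict = field(default_factory=dict) # incident mode, system load
```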
Contextual Modifier
(Nomotic AI term) A governance mechanism that adjusts dimension weights, thresholds, or evaluation behavior based on the operational context of an action. The same action type may warrant different governance scrutiny depending on time of day, data sensitivity, workflow stage, or concurrent activity. Contextual modifiers enable governance that adapts to circumstances without requiring separate policies for every scenario.
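A toy sketch of threshold adjustment; the multipliers and the context keys (`business_hours`, `data_sensitivity`) are illustrative assumptions:

```python
# Illustrative modifier: tighten the approval threshold off-hours or for
# sensitive data, so one policy adapts to circumstances.
def effective_threshold(base_threshold: float, context: dict) -> float:
    t = base_threshold
    if not context.get("business_hours", True):
        t *= 0.5  # off-hours: require twice the scrutiny (assumed factor)
    if context.get("data_sensitivity") == "high":
        t *= 0.5  # sensitive data: tighten again (assumed factor)
    return t

def is_approved(risk_score: float, base_threshold: float, context: dict) -> bool:
    """An action passes only if its risk fits under the context-adjusted threshold."""
    return risk_score <= effective_threshold(base_threshold, context)
```

The same action (`risk_score=0.3`) can pass during business hours and fail at night, without a separate nighttime policy.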
Workflow Governance
Governance that evaluates not just individual actions but sequences of actions within a workflow. Assesses dependency relationships between steps, cumulative risk across a workflow, drift patterns across steps, and compound authority requirements. Individual actions that appear safe in isolation may create unacceptable risk when combined.
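The cumulative-risk idea can be illustrated with a simple additive budget; real workflow governance would also weigh dependencies and compound authority, which this sketch omits:

```python
# Illustrative: per-step risks that look safe alone may breach a workflow budget.
def workflow_exceeds_budget(step_risks: list[float], budget: float) -> bool:
    """True if cumulative risk across the workflow exceeds the budget,
    even when every individual step is under it."""
    total = 0.0
    for risk in step_risks:
        total += risk
        if total > budget:
            return True
    return False
```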
Canary Testing
The practice of maintaining a set of reference inputs with known expected outputs and running them against AI systems on a regular schedule. Deviations from expected outputs indicate model changes, behavioral drift, or system degradation. A lightweight form of ongoing model validation that catches silent updates before they cause downstream harm.
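A minimal canary harness, assuming exact-match expectations; real canaries would typically allow semantic tolerance rather than string equality:

```python
from typing import Callable

# Sketch of a canary run: reference inputs with known expected outputs.
def run_canaries(model: Callable[[str], str],
                 canaries: dict[str, str]) -> list[str]:
    """Return the canary inputs whose outputs deviate from expectation."""
    return [prompt for prompt, expected in canaries.items()
            if model(prompt) != expected]
```

Run on a schedule, a non-empty result is the early signal that a silent model update or drift has occurred.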
AI Policy
Rules, regulations, and guidelines established by governing authorities or organizations that govern the development, deployment, and use of AI technologies. AI policy sits above governance architecture: policy defines the intent; governance enforces it. Policy without enforcement is aspiration. Enforcement without clear policy is arbitrary. See also: U.S. Executive Order on AI
AI Monitoring
The continuous observation and analysis of AI system performance to ensure reliability, safety, and compliance. AI monitoring generates the data that governance architectures act upon. Monitoring alone is observability: it tells you what happened. Monitoring combined with enforcement authority becomes governance: it can act on what it observes. Related core glossary term: Observability
AI Literacy
The understanding of AI concepts, capabilities, and limitations that enables informed interaction with AI technologies. The EU AI Act includes AI literacy requirements. In governance contexts, AI literacy is a prerequisite for meaningful human oversight: a human reviewer who doesn’t understand what they’re reviewing cannot provide genuine oversight, regardless of the governance process.
Human-in-the-Loop (HITL)
An approach in which human oversight is integrated into the AI decision-making process, with humans able to review, validate, override, or refine AI outputs before they take effect. HITL is one point on the human involvement spectrum. Singapore’s agentic AI governance framework defines four levels ranging from “agent proposes, human operates” (maximum human involvement) to “agent operates, human observes” (minimum human involvement). The appropriate level depends on risk tier and operational context. Related core glossary terms: Human Override (Dimension 12), Interruption Rights
Algorithmic Accountability
The obligation to justify automated decisions made by AI systems and mitigate potential harm, ensuring transparency and responsibility throughout the AI lifecycle. Algorithmic accountability requires both the technical infrastructure to trace decisions (audit trails, explainability) and the organizational structures to assign responsibility (ownership chains, escalation paths). Related core glossary term: Accountability Chain
Algorithmic Governance
The use of algorithms to manage and regulate organizational or societal functions. In AI governance contexts, algorithmic governance describes the shift from human-administered governance (manual review processes) to machine-administered governance (automated policy enforcement). Runtime governance architectures represent a form of algorithmic governance, using systematic evaluation to govern AI agent behavior at machine speed.
User Consent Management
The processes for collecting, storing, and managing user permissions for AI data use. Consent management is a compliance requirement under GDPR and similar regulations. Governance architectures should enforce consent boundaries: an agent should not be able to process data for which valid consent has not been obtained, regardless of technical capability. See also: GDPR
AI Security
The discipline of protecting AI systems and data from cyber threats, tampering, and adversarial attacks. AI security safeguards model integrity, confidentiality, and availability. Distinct from AI governance (which governs agent behavior) and AI safety (which prevents harmful outputs). Security addresses external threats; governance addresses internal behavioral control; safety addresses outcome quality. All three are necessary.
ISDAIRE
The minimum upstream conditions that must exist before architectural governance can execute: Intent, Scope, Domain separation, Authority source, Irreversibility awareness, Risk framing, Execution boundary. (Dr. Masayuki Otani)
ARETABA
What “architectural governance” must actually be in production: Authority, Refusal, Escalation, Traceability, Accountability, Boundary, Admissibility. (Dr. Masayuki Otani)
Risk & Assessment
AI Risk Management
The systematic process of identifying, assessing, and mitigating risks across AI development and deployment. Encompasses technical risks (model failure, adversarial vulnerability), societal risks (bias, discrimination), legal risks (regulatory non-compliance), and operational risks (drift, degradation). Effective AI risk management requires both pre-deployment assessment and continuous runtime monitoring. See also: NIST AI Risk Management Framework
Risk Taxonomy
A hierarchical framework that categorizes AI risks into technical, societal, legal, and operational dimensions. Risk taxonomies provide the classification structure that governance architectures use to determine appropriate controls for different risk levels. Without a coherent risk taxonomy, governance resources are applied uniformly rather than proportionally. See also: NIST Draft Taxonomy of AI Risk, MIT AI Governance Map
AI Impact Assessment
A structured evaluation of the potential ethical, legal, and societal effects of an AI system, typically conducted before deployment. Impact assessments identify who is affected, how severely, and what mitigations are necessary. They represent a point-in-time governance activity that should be complemented by continuous runtime monitoring for effects that only emerge in production. See also: ISO/IEC 42001
Reasonably Foreseeable Misuse
(EU AI Act, Art. 3(13)) The use of an AI system in a way that is not in accordance with its intended purpose but which may result from reasonably foreseeable human behavior or interaction with other systems. Governance architectures must account not only for intended use but for predictable deviations: users will interact with AI systems in ways designers did not intend but should have anticipated.
Serious Incident
(EU AI Act definition) An incident or malfunctioning of an AI system that directly or indirectly leads to death or serious harm to a person’s health, serious and irreversible disruption of critical infrastructure management, or infringement of fundamental rights protections. Serious incidents trigger mandatory reporting obligations and represent the failure mode that governance architectures are specifically designed to prevent.
Ethical Risk
The potential for an AI system to cause harm through unethical behavior, including bias, discrimination, privacy violation, or inequitable treatment. Ethical risk is distinct from technical risk (system failures) and operational risk (performance degradation): it addresses situations where a system operates as designed but produces morally unacceptable outcomes.
Philosophical & Conceptual Terms
Autonomy vs. Heteronomy
A philosophical distinction critical to AI governance discourse. Autonomy (from Greek auto + nomos) means self-governed, governed by one’s own laws. Heteronomy (from Greek hetero + nomos) means governed by external laws. Current AI systems are heteronomous: they operate under externally defined rules, not self-generated ones. The distinction matters because calling AI systems “autonomous” implies a capacity for self-governance they do not possess, which can lead to governance gaps built on false assumptions.
Sovereignty
Self-ownership with recognized standing. In AI governance, the question of whether an AI agent has sovereign status (it does not, in current systems) determines the accountability structure. Agents that are not sovereign cannot be held accountable; accountability flows to the humans who created, deployed, and authorized them.
Dominion
Ownership and authority over another entity. In AI governance, humans exercise dominion over AI agents. This is not a limitation but a structural requirement: agents that are owned have clear accountability chains. Agents without clear ownership create governance vacuums.
Action-Law Duality
(Nomotic AI concept) The principle that every AI capability requires a corresponding governance structure. Every “can” requires a “should.” The agentic layer asks what is possible. The nomotic layer asks what is appropriate. Building capability without corresponding governance is building half a system.
Ethical Justification
The requirement that AI actions must be justifiable, not merely executable. The question of whether an action is right must be answerable. Compliance is necessary but insufficient. Actions should be justifiable on principle, not merely allowed by procedure. Ethical justification grounds governance in values rather than solely in rules.
Institutional Friction
The deliberate introduction of procedural cost for governance overrides. The harder it is to bypass governance quietly, the lower the systemic governance risk. Also called Newtonian Friction in governance contexts: it converts political discretion into measurable energy expenditure. Override paths exist for emergencies but require multi-party authorization, are time-bound, and generate measurable operational cost.
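One way to sketch a multi-party, time-bound override check; the parameter names and defaults are illustrative assumptions:

```python
# Illustrative override gate: bypassing governance requires at least two
# distinct approvers and expires after a fixed window, so it cannot be
# invoked quietly or left open indefinitely.
def override_valid(approvers: set[str], granted_at: float, now: float,
                   ttl_seconds: float = 3600, required_parties: int = 2) -> bool:
    """True only while a sufficiently authorized override is still fresh."""
    enough_parties = len(approvers) >= required_parties
    still_fresh = (now - granted_at) <= ttl_seconds
    return enough_parties and still_fresh
```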
Regulatory & Framework References
EU AI Act
European Union regulation establishing a risk-based framework for AI systems. Introduces the concept of “deployer,” the organization that puts an AI system to use, which bears direct compliance obligations regardless of whether it built the underlying model. Deployers cannot outsource compliance accountability to vendors.
NIST AI Risk Management Framework (AI RMF)
U.S. National Institute of Standards and Technology framework for managing AI risk. Provides voluntary guidance organized around Govern, Map, Measure, and Manage functions. Frequently referenced in vendor contracts as a named compliance standard.
ISO/IEC 42001
International standard for AI management systems. Specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system. Used as a certification and compliance reference in enterprise AI governance programs.
Singapore Model AI Governance Framework for Agentic AI
Published January 2026 by Singapore’s Infocomm Media Development Authority (IMDA). The world’s first government framework specifically addressing agentic AI governance. Defines four dimensions of governance (assess and bound risks, make humans accountable, implement technical controls, enable end-user responsibility) and four levels of human involvement ranging from “agent proposes, human operates” to “agent operates, human observes.”
OWASP Top 10 for LLM Applications
Open Worldwide Application Security Project’s enumeration of the most critical security risks specific to large language model applications. Covers prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, and related concerns. Runtime governance architectures should map their controls to OWASP risks to demonstrate security coverage.
Third-Party AI Risk Management (TPRM)
The extension of traditional vendor risk management to address AI-specific risks including model training data governance, bias testing, behavioral drift, data handling, and adversarial robustness. Traditional TPRM frameworks were designed for predictable failure modes in conventional software and must be updated to address the probabilistic, evolving nature of AI systems.
AI TRiSM
(Gartner term) AI Trust, Risk, and Security Management. A framework that unifies governance, trustworthiness, and security into a single operational strategy. Gartner positions AI TRiSM as an enterprise capability that organizations need to operationalize responsible AI at scale. Represents the analyst community’s recognition that trust, risk, and security cannot be managed in isolation.
AI Regulatory Sandbox
(EU AI Act, Art. 3(55)) A controlled framework established by a regulatory authority that allows AI providers to develop, train, validate, and test AI systems under regulatory supervision for a limited time. Regulatory sandboxes provide a structured environment for innovation while maintaining governance oversight, offering a practical mechanism for balancing capability development with accountability requirements.
General-Purpose AI Model (GPAI)
(EU AI Act, Art. 3(63)) An AI model trained with large amounts of data using self-supervision at scale, displaying significant generality and capable of performing a wide range of distinct tasks. GPAI models create governance challenges because they can be integrated into downstream systems for purposes the model developer did not anticipate. Governance must extend beyond the model layer to the deployment and usage layers.
AI System
(EU AI Act, Art. 3(1)) A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that infers from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This legal definition is significant because it determines which systems fall under regulatory governance requirements.
AI Verify Foundation
A member-based foundation that develops open-source AI testing tools to enable responsible AI practices. Promotes best practices and standards for AI governance testing. Represents the ecosystem of organizations building governance tooling and assessment capabilities. See also: Singapore IMDA Framework
Nomotic GATE
The complete evaluation pipeline that every consequential agent action must pass through before taking effect. The four components of the acronym map to the four functional phases: Governance (dimensional evaluation), Authorization (verdict issuance and seal generation), Trust (behavioral history integration and calibration), Enforcement (interrupt authority and execution binding).
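A purely hypothetical sketch of the four phases as a linear check; the scoring, threshold, and seal format are placeholders, not the actual Nomotic implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    allowed: bool
    seal: Optional[str] = None  # issued only when the action is authorized

def gate(action: dict,
         governance_score: float,  # Governance: dimensional evaluation (assumed scalar)
         trust_score: float,       # Trust: behavioral history calibration (assumed scalar)
         threshold: float = 0.7) -> Verdict:
    """Authorization issues a verdict and seal; Enforcement blocks below threshold."""
    combined = (governance_score + trust_score) / 2
    if combined < threshold:
        return Verdict(allowed=False)  # Enforcement: action never executes
    seal = f"seal-{action['id']}-{combined:.2f}"  # placeholder seal format
    return Verdict(allowed=True, seal=seal)
```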
Industry Terms in Context
Guardrails
A widely used term for constraints placed on AI systems. Typically refers to input/output filtering, content policies, and behavioral boundaries. In governance discourse, “guardrails” implies restriction. Nomotic governance reframes constraints as lawful enablement, defining appropriate behavior rather than merely preventing inappropriate behavior.
Observability
The ability to understand the internal state of a system from its external outputs. In AI governance, observability typically refers to logging, monitoring, and alerting on agent behavior. Observability is necessary but not sufficient for governance: it tells you what happened but cannot prevent what shouldn’t happen.
Access Control
Mechanisms that determine what resources, tools, or systems an agent can reach. Access control answers the “can” question: whether an agent has permission to interact with a target. Distinct from governance, which answers the “should” question: whether a permitted action is appropriate in a specific context.
Model Card
Documentation describing an AI model’s intended uses, limitations, performance characteristics across population segments, known failure modes, and ethical considerations. Represents current best practice for model transparency. Quality and completeness vary significantly across vendors.
Red Teaming
Systematic adversarial testing of AI systems by attempting to produce harmful, inaccurate, or policy-violating outputs through prompt injection, jailbreaking, and edge-case exploration. Essential pre-deployment governance activity. What is discovered in controlled testing is far less costly than what users or adversaries discover in production.
Shadow AI
AI tools adopted by employees outside formal procurement and governance processes. Creates ungoverned third-party risk that contractual protections cannot address. Governance frameworks only work for AI deployments organizations know about.
Data Poisoning
An adversarial attack in which malicious data is introduced into training datasets to compromise model behavior. A supply chain vulnerability that cannot be detected through output monitoring alone and requires governance controls at the data ingestion layer.
Prompt Injection
An attack in which adversarial inputs cause an AI system to deviate from its intended behavior. A runtime security concern that governance architectures must address through both input validation and behavioral monitoring.
Hallucination
AI-generated output that is plausible, fluent, and factually incorrect. A governance challenge because hallucinated outputs can pass surface-level quality checks while introducing material inaccuracies into consequential decisions.
Model Drift
Changes in AI model behavior over time due to training data shifts, model updates, or environmental changes. Distinct from behavioral drift (which measures observable changes in agent actions), model drift is the upstream cause; behavioral drift is the downstream symptom that governance can detect.
Concept Drift
Changes in the statistical relationship between input features and target outputs over time, causing AI model performance to degrade even without changes to the underlying data distribution. Concept drift is subtler than data drift: the data may look the same while its meaning has changed. Runtime governance must monitor for concept drift through outcome tracking, not just input monitoring. Related core glossary terms: Behavioral Drift, Model Drift
Data Drift
Changes in the statistical properties or distributions of input data over time that can degrade AI model performance. Data drift is an upstream signal: if input distributions are shifting, model behavior will eventually follow. Governance architectures should monitor data characteristics as leading indicators of behavioral drift, not wait for behavioral changes to manifest.
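A deliberately simple drift check over numeric inputs, flagging drift when the recent mean moves more than k reference standard deviations from the reference mean; the threshold rule is an illustrative choice, not a standard statistical test:

```python
import statistics

# Illustrative leading-indicator check on input data, run before any
# behavioral change manifests downstream.
def mean_shift_drift(reference: list[float], recent: list[float],
                     k: float = 3.0) -> bool:
    """True if the recent mean sits more than k reference standard
    deviations away from the reference mean."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.fmean(recent) - ref_mean) > k * ref_std
```

Production systems would track full distributions (e.g., per-feature histograms), but even a mean-shift check catches gross upstream changes early.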
Contributing
This glossary is maintained as a community resource. If you identify terms that should be added, definitions that need refinement, or concepts that are missing, contributions are welcome.
For additions or corrections, contact us.
This glossary is provided as a public resource. Terms marked as Nomotic AI terms or concepts originated with the Nomotic AI project. All other terms reflect general industry usage. Nomotic™, Nomotic AI™, Runtime AI Governance™ (2026). All rights reserved.