The Internet Connected the World. Will AI Isolate It Again?

The internet was supposed to make borders irrelevant.

Information would flow freely. Commerce would cross jurisdictions without friction. A person in Lagos could communicate with a person in Seoul the same way they could communicate with a neighbor. The network’s architecture was fundamentally flat. There was no built-in mechanism for a government to say this information stays here, this data does not leave, this transaction is permitted in this country but not that one.

For a while, that flatness held. And then governments began to realize that the network their citizens depended on was largely controlled by other countries, ran on infrastructure they did not own, processed data they could not access, and was governed by legal frameworks that did not reflect their values or serve their interests.

The response was not to maintain the openness. It was to build walls.

AI is accelerating that process. And the walls being built now are higher, less permeable, and more deliberately constructed than anything that emerged from the internet governance debates of the past thirty years.

Three Governance Frameworks, Three Different Worlds

The EU AI Act establishes a risk-based regulatory framework that applies to AI systems operating in Europe regardless of where they are built. High-risk systems require conformity assessments, technical documentation, continuous risk management, and demonstrably operational human oversight. The framework is detailed, enforceable, and explicitly extraterritorial in its reach. A company in California building AI systems used by European customers is subject to EU AI Act requirements, whether or not it has a single employee on European soil.

China has developed a parallel framework that shares some surface vocabulary with the EU approach but reflects fundamentally different governance values. Content that threatens national unity, social stability, or the leadership of the Communist Party requires specific controls. Generative AI systems must align outputs with socialist core values. Data localization requirements are extensive. A system that is compliant with EU law may be prohibited under Chinese law, and vice versa, not because of technical differences but because the governance frameworks encode incompatible social and political values into the architecture of AI.

The United States has no comprehensive federal AI governance framework. What exists is a patchwork of sector-specific regulation overlaid with state-level initiatives that vary significantly. California’s approach differs from Texas’s. The federal government’s approach differs by agency. An organization operating nationally navigates different governance expectations depending on the sector in which it operates, the state of its customers, and the federal agency with jurisdiction over its activities.

These are not variations on the same framework. They are three different answers to the question of what AI governance is for, who it protects, and what values it encodes.

The Agent Transaction Problem

For most of the internet era, jurisdictional complexity was manageable. A website served content globally. A platform operated its servers wherever costs and laws were favorable. The user was in one country. The server was in another. The data might be in a third. Courts and regulators argued about which law applied, but the underlying transactions were simple enough that the arguments, however contentious, were tractable.

Agentic AI breaks this model in a specific way.

A transaction initiated by an AI agent has no single, obvious jurisdiction. The agent might be instantiated on a cloud provider in Virginia, operating under a behavioral contract registered in Delaware, executing a purchase on behalf of a user in Germany, transacting with a merchant in Singapore, using a model trained in the United States on data sourced globally. The transaction crosses five jurisdictions in milliseconds. The governance requirements of each jurisdiction may be incompatible with those of the others.

Under the EU AI Act, the agent’s actions affecting European users trigger European requirements regardless of where the agent runs. Under Chinese data sovereignty requirements, any data that touches Chinese infrastructure is subject to Chinese law, regardless of where the transaction originated. Under U.S. financial regulation, the transaction may trigger requirements based on the currencies involved or the nature of the purchase, regardless of where the parties are located.

Multi-agent chains multiply this. An orchestrating agent in one jurisdiction delegates to a subagent in another, which calls a tool hosted in a third, which processes data from a fourth. The accountability chain that governance requires becomes genuinely difficult to establish when the technical chain of delegation crosses multiple jurisdictions with incompatible governance requirements.
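To make the mechanics concrete, here is a minimal sketch in Python. The jurisdiction labels, requirement names, and the CONFLICTS table are all invented for illustration; real applicability is a legal determination, not a set union. The toy model captures only one property of the problem: obligations accumulate with every hop in the chain, and nothing removes them.

```python
from dataclasses import dataclass
from itertools import combinations

# Hypothetical rules: requirements a framework imposes once any hop
# of the transaction touches its jurisdiction.
FRAMEWORK_REQUIREMENTS = {
    "EU":    {"human_oversight", "regulator_audit_access"},
    "CN":    {"data_localization", "content_alignment"},
    "US-VA": {"sector_rules"},
    "SG":    {"consumer_disclosure"},
}

# Hypothetical conflict: data that must stay in-country cannot also
# be opened to a foreign regulator's audit (illustrative only).
CONFLICTS = {frozenset({"regulator_audit_access", "data_localization"})}

@dataclass
class Hop:
    agent: str         # which agent or tool acted
    jurisdiction: str  # where that action is governed

def applicable_requirements(chain: list[Hop]) -> set[str]:
    """Union of every framework triggered anywhere in the chain."""
    reqs: set[str] = set()
    for hop in chain:
        reqs |= FRAMEWORK_REQUIREMENTS.get(hop.jurisdiction, set())
    return reqs

def unsatisfiable_pairs(reqs: set[str]) -> list[frozenset[str]]:
    """Requirement pairs in force that cannot both be met."""
    return [frozenset(p) for p in combinations(sorted(reqs), 2)
            if frozenset(p) in CONFLICTS]

# An orchestrator delegates to a subagent, which calls a tool,
# which processes data elsewhere: four hops, four frameworks.
chain = [
    Hop("orchestrator", "US-VA"),
    Hop("subagent", "EU"),
    Hop("payment_tool", "SG"),
    Hop("data_processor", "CN"),
]

reqs = applicable_requirements(chain)
print("in force:", sorted(reqs))
print("unsatisfiable:", unsatisfiable_pairs(reqs))
```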

This is not a problem that better technology solves. It is a problem that requires governance coordination across jurisdictions that currently have strong incentives not to coordinate.

AI Sovereignty and the Wall-Building Instinct

Sovereignty in the AI context means something specific. It means a country’s ability to control the AI systems operating within its borders, to ensure that the data those systems process stays under national jurisdiction, and to prevent dependence on AI infrastructure controlled by potentially adversarial foreign actors.

These are legitimate concerns. The concentration of foundational AI capabilities in a small number of American technology companies creates genuine dependencies for every other country. A government that cannot run its critical AI systems without accessing infrastructure it does not control has a real sovereignty problem worth addressing.

The response has been predictable. Data localization requirements. Domestic AI development programs. Requirements that AI systems used in government operations be hosted on national infrastructure. Investment in national AI champions that can compete with foreign providers. Each of these responses is individually rational from a national security perspective.

Collectively, they are rebuilding the infrastructure of separation that the internet spent three decades dismantling.

This is not hypothetical. The Chinese internet is already substantially isolated from the global internet, operating its own platforms, AI systems, and governance frameworks, with limited points of controlled contact with the outside. The EU’s approach is less isolating but creates regulatory friction for non-European AI systems, potentially leading to effective market segmentation. India is developing AI sovereignty frameworks. The Gulf states are investing in national AI infrastructure. The pattern is consistent across political systems and development levels.

The pattern is not limited to national governments. In May 2026, Utah became the first U.S. state to target VPN use itself, requiring websites to verify whether users are physically located in the state even when a VPN masks their location, and prohibiting platforms from publishing instructions on how to use one. The law is technically unenforceable, but the precedent it sets is significant. Governments are no longer just regulating content. They are reaching into the tools people use to maintain privacy on an open network.

The internet connected the world by driving the marginal cost of global communication to effectively zero. The AI governance frameworks being built now use regulatory authority to deliberately add friction back into that connection.

Moving Backward or Forward Differently?

The honest answer to whether AI is isolating the world again is: it depends on what you mean by connected.

The internet connected the world through information flow. It did not prevent the concentration of economic and political power in a small number of actors who controlled the infrastructure. It did not produce the equitable access its proponents promised. It did not resolve the tensions between national sovereignty and global openness that its architects largely ignored.

The governance response to AI is partly a correction to that. Countries that spent two decades watching their citizens’ data flow to foreign servers controlled by foreign corporations, with limited recourse when those systems caused harm, are now building the regulatory infrastructure to reassert control. That is a legitimate project.

The risk is that the reassertion of control produces genuine fragmentation rather than governed openness. A world in which AI systems cannot easily cross borders, where data cannot flow between jurisdictions without friction, where the foundational models and infrastructure in each major jurisdiction are incompatible with those in others, is a world that has traded the governance failures of the open internet for the innovation failures of a closed one.

Historically, the periods of greatest human progress have been periods of relatively open exchange of information, people, goods, and ideas. The periods of greatest stagnation have been periods of enforced isolation. The question the AI governance moment is posing, without fully articulating it, is whether the right response to the governance failures of global AI is to govern globally or to isolate nationally.

The global governance response requires international coordination that has proven extraordinarily difficult to achieve. The national isolation response is achievable and is happening. It may be the worse answer to the better question.

What Organizations Operating Globally Need to Understand

For organizations building and deploying AI systems across multiple jurisdictions, the practical consequences of this fragmentation are already evident.

Compliance with the EU AI Act for high-risk systems requires documentation, oversight, and evidence standards that are genuinely demanding. Compliance with Chinese AI regulations requires content controls and alignment with political values that are incompatible with the openness most Western organizations assume. Compliance with the U.S. patchwork requires tracking which sector-specific requirements apply to which products in which states.

An AI agent conducting transactions across these jurisdictions must comply with the governance requirements of all three simultaneously. The behavioral contract governing the agent must be legible to governance frameworks that speak different regulatory languages, encode different values, and maintain incompatible data sovereignty requirements.

This is not a compliance challenge that a single governance framework can address. It requires understanding that the governance infrastructure for a globally operating agent is itself a multi-layered system: protocol-level identity and attribution that any jurisdiction can read, governance configurations that can be adapted per jurisdiction, and audit trails that can satisfy the evidence requirements of each applicable framework.
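As a rough illustration of that layering, here is a minimal sketch in Python. Every class, field, and value is invented for this example; it is not any existing protocol or product, only one way the three layers could be kept separate.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """Layer 1: protocol-level identity and attribution,
    jurisdiction-neutral so any regulator can resolve who
    acted on whose behalf."""
    agent_id: str      # stable identifier for the agent instance
    principal: str     # the person or organization it acts for
    contract_ref: str  # pointer to the governing behavioral contract

@dataclass
class JurisdictionConfig:
    """Layer 2: governance configuration adapted per jurisdiction."""
    jurisdiction: str
    data_residency: str              # where this region's data may live
    required_disclosures: list[str]  # what must be surfaced before acting
    human_approval_over: float       # transaction value requiring sign-off

@dataclass
class AuditEntry:
    """Layer 3: an evidence record expressed in terms each
    applicable framework can evaluate against its own rules."""
    identity: AgentIdentity
    jurisdiction: str
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# One agent, two regional configurations, one audited action.
identity = AgentIdentity("agent-7f3", "acme-gmbh", "contract-v2")
configs = {
    "EU": JurisdictionConfig("EU", "eu-west", ["ai_disclosure"], 500.0),
    "SG": JurisdictionConfig("SG", "ap-southeast", [], 5000.0),
}
print(AuditEntry(identity, "EU", "purchase:order-1881"))
```

The design point is the separation itself: identity travels with the agent unchanged, configuration swaps per jurisdiction, and the audit trail records both, so each regulator can apply its own evidence standard to the same record.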

The alternative is to operate differently in each jurisdiction. Which is to say, to accept the fragmentation and build for a world in which agents are not globally interoperable but jurisdiction-specific.

That world is already forming. The organizations that recognize it early will deliberately build for it. The organizations that do not will discover it when an agent transaction crosses a border it was not designed to cross, producing a governance failure that none of their frameworks was designed to catch.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Book me to speak at your next event.

Start managing your agents for free.

Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, is available now!