AI is borderless — but laws are not. How are different regions trying to govern it?
Artificial intelligence crosses borders effortlessly. Data flows globally, models are trained on international datasets, and applications can spread worldwide in seconds. But law and regulation are still tied to nation-states, each with its own priorities and values. The result is a patchwork of approaches: some compatible, some in tension, and none yet universal.
The European Union: Risk and Rights
The EU has taken the most ambitious step with its AI Act. This law categorises systems by risk level: minimal, limited, high, or unacceptable. High-risk systems, such as those used in healthcare, education, or policing, face strict requirements for transparency, oversight, and human accountability. Systems deemed unacceptable — like social scoring — are banned outright.
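To make the tiering concrete, here is a minimal sketch of how an engineering team might encode it as an internal compliance gate. The domain labels, tier assignments, and obligation descriptions below are illustrative assumptions for the sketch, not the Act's actual annex categories or legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping of application domains to tiers.
# These labels are assumptions, not the Act's annex categories.
DOMAIN_TIERS = {
    "spam_filtering": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,         # transparency duties apply
    "healthcare_triage": RiskTier.HIGH,  # strict oversight requirements
    "education_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,  # banned outright
}

def compliance_gate(domain: str) -> str:
    """Return a coarse compliance action for a proposed deployment."""
    tier = DOMAIN_TIERS.get(domain)
    if tier is None:
        return "classify first: tier unknown"
    if tier is RiskTier.UNACCEPTABLE:
        return "blocked: prohibited practice"
    if tier is RiskTier.HIGH:
        return "requires conformity assessment, human oversight, logging"
    if tier is RiskTier.LIMITED:
        return "requires transparency disclosure"
    return "no additional obligations"

if __name__ == "__main__":
    for domain in ("chatbot", "social_scoring", "healthcare_triage"):
        print(domain, "->", compliance_gate(domain))
```

In practice, any such gate could only be a first filter before legal review, since the Act's tier boundaries turn on the context of use, not on domain labels alone.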
The EU’s approach is rights-centred, reflecting its broader tradition of embedding human dignity and privacy into law.
The United States: Industry-Led, Fragmented
The US has no single federal AI law. Instead, regulation is spread across agencies and sectors. Much of the framework relies on industry-led standards, with voluntary codes of conduct and guidelines. While this fosters innovation, it also leaves gaps in accountability. Recent executive actions have begun pushing for more federal oversight, but the landscape remains fragmented.
China: Control and Stability
China’s governance model emphasises state oversight and social order. AI laws there focus heavily on censorship, compliance, and surveillance. Developers must register algorithms with the government, and content-generating systems face strict rules to align with state priorities. For China, governance is as much about political stability as technological safety.
The Global South: Risks of Dependence
Many countries in the Global South do not yet have comprehensive AI regulations. This creates a risk of becoming regulation takers: adopting standards set elsewhere, often by the EU, US, or China. While alignment with those standards can ease access to global markets, it also raises concerns about sovereignty and whether imported frameworks reflect local needs or values.
Patchwork Quilt or Global Treaty?
These differences leave the world at a crossroads. Should AI be governed as a patchwork of regional frameworks, each reflecting local values? Or is there a need for a global treaty — a shared foundation for how humanity governs a technology that ignores borders?
So far, international coordination remains limited. The future of AI governance may depend not just on law, but on whether states can find common ground in defining what risks are worth taking — and what values must never be compromised.
