“The AI bubble is going to burst. AI is going to be the downfall of humanity. AI is going to replace humans in the workplace.”
These arguments appear frequently in public discussion and often reference governance, but they rarely examine how societies are actually choosing to govern the technology as it develops.
This article follows a previous piece on Global AI Governance written as part of the Human AI Symbiosis project, but takes a different direction.
Instead of a static overview, it aims to serve as a growing collection of resources that follows developments while expanding the scope to include national approaches from a broader range of countries.
The European Union
The European Union’s AI Act came into force on the first of August 2024, although in November 2025 it was reported that full implementation isn’t expected until 2027.
Similar to the GDPR, it is extraterritorial, applying both to AI systems developed within the EU and to those based elsewhere when they affect EU users.
The Act uses a risk-based framework with the strictest requirements applying to high-risk AI systems, while limited-risk systems are subject to lighter transparency obligations. Minimal-risk systems remain largely unregulated.
The EU has also defined unacceptable-risk systems that are banned outright, covering a wide range of prohibited functions and reflecting its emphasis on fundamental rights and product safety.
The European Union has created a dedicated site for its AI Act, covering compliance information, resources and a high-level summary.
The United States
The United States has no comprehensive federal AI legislation, preferring innovation over regulation; however, several states, such as New York and California, have introduced their own laws.
The White House has released a twenty-eight-page white paper outlining their three priorities: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security.
They encourage the private sector to self-regulate and have proposed withholding funding from states whose regulations are deemed “burdensome”.
China
In December 2025 it was reported that China is adopting a phased approach to AI regulation with the goal of iterative, state-led oversight.
Even though they stepped back from pursuing a comprehensive AI law, they have drafted rules for AI with human-like interaction.
They have also proposed the creation of a global body to coordinate regulation and have shown support for the UN playing a role in AI governance.
Japan
Japan’s 2024 whitepaper opens by explicitly stating the objective of becoming the world’s most AI-friendly country, and it represents the second stage following an initial whitepaper in 2023.
As yet they haven’t created a comprehensive AI statute; instead, in 2025, they passed various legislative acts that set business guidelines, promote research and development, and clarify how existing laws apply to AI.
Their approach is presented as a feedback loop, where strengthening infrastructure and competitiveness is paired with investment in AI literacy, human resources, and risk response, using a multi-layered model that prioritises soft governance alongside minimum necessary hard law.
The United Kingdom
There are currently no AI-specific statutes in the United Kingdom, but the IAPP reported that a bill is expected to be introduced in May 2026.
Despite the lack of statutes, the UK does have regulators and guidance frameworks, as well as laws covering areas such as data protection, privacy and employment, allowing for a regulator-led model.
There has also been parliamentary discussion about the impact AI could have on various industries.
A whitepaper was previously published by Rishi Sunak’s government, but its future remains uncertain under the current Labour government.