
Our quick guide to the 6 ways we can regulate AI

Let us walk you through all the most (and least) promising efforts to govern AI around the world.



AI regulation is hot. Ever since the success of OpenAI’s chatbot ChatGPT, the public’s attention has been grabbed by wonder and worry about what these powerful AI tools can do. Generative AI has been touted as a potential game-changer for productivity tools and creative assistants, but these tools are already showing the ways they can cause harm. Generative models have been used to produce misinformation, and they could be weaponized as spamming and scamming tools.

Everyone from tech company CEOs to US senators and leaders at the G7 meeting has in recent weeks called for international standards and stronger guardrails for AI technology. The good news? Policymakers don’t have to start from scratch.  

We’ve analyzed six different international attempts to regulate artificial intelligence, set out the pros and cons of each, and given them a rough score indicating how influential we think they are.

A legally binding AI treaty

The Council of Europe, a human rights organization that counts 46 countries as its members, is finalizing a legally binding treaty for artificial intelligence. The treaty requires signatories to take steps to ensure that AI is designed, developed, and applied in a way that protects human rights, democracy, and the rule of law. The treaty could potentially include moratoriums on technologies that pose a risk to human rights, such as facial recognition.

If all goes according to plan, the organization could finish drafting the text by November, says Nathalie Smuha, a legal scholar and philosopher at the KU Leuven Faculty of Law who advises the council. 

Pros: The Council of Europe includes many non-EU countries, including the UK and Ukraine, and has invited others such as the US, Canada, Israel, Mexico, and Japan to the negotiating table. “It’s a strong signal,” says Smuha.

Cons: Each country has to individually ratify the treaty and then implement it in national law, which could take years. There’s also a possibility that countries will be able to opt out of certain elements that they don’t like, such as stringent rules or moratoriums. The negotiating team is trying to find a balance between strengthening protection and getting as many countries as possible to sign, says Smuha. 

Influence rating: 3/5

The OECD AI principles 

In 2019, countries that belong to the Organisation for Economic Co-operation and Development (OECD) agreed to adopt a set of nonbinding principles laying out some values that should underpin AI development. Under these principles, AI systems should be transparent and explainable; should function in a robust, secure, and safe way; should have accountability mechanisms; and should be designed in a way that respects the rule of law, human rights, democratic values, and diversity. The principles also state that AI should contribute to economic growth. 

Pros: These principles, which form a sort of constitution for Western AI policy, have shaped AI policy initiatives around the world since. The OECD’s legal definition of AI will likely be adopted in the EU’s AI Act, for example. The OECD also tracks and monitors national AI regulations and does research on AI’s economic impact. It has an active network of global AI experts doing research and sharing best practices.

Cons: The OECD’s mandate as an international organization is not to come up with regulation but to stimulate economic growth, says Smuha. And translating the high-level principles into workable policies requires a lot of work on the part of individual countries, says Phil Dawson, head of policy at the responsible AI platform Armilla. 

Influence rating: 4/5

The Global Partnership on AI

The brainchild of Canadian prime minister Justin Trudeau and French president Emmanuel Macron, the Global Partnership on AI (GPAI) was founded in 2020 as an international body that could share research and information on AI, foster international research collaboration around responsible AI, and inform AI policies around the world. Its 29 member countries include some in Africa, South America, and Asia.

Pros: The value of GPAI is its potential to encourage international research and cooperation, says Smuha. 

Cons: Some AI experts have called for an international body similar to the UN’s Intergovernmental Panel on Climate Change to share knowledge and research about AI, and GPAI had potential to fit the bill. But after launching with pomp and circumstance, the organization has been keeping a low profile, and it hasn’t published any work in 2023. 

Influence rating: 1/5 

The EU’s AI Act

The European Union is finalizing the AI Act, a sweeping regulation that targets the most “high-risk” uses of AI systems. First proposed in 2021, the bill would regulate AI in sectors such as health care and education.

Pros: The bill could hold bad actors accountable and prevent the worst excesses of harmful AI by issuing huge fines and preventing the sale and use of noncomplying AI technology in the EU. The bill will also regulate generative AI and impose some restrictions on AI systems that are deemed to create “unacceptable” risk, such as facial recognition. Since it’s the only comprehensive AI regulation out there, the EU has a first-mover advantage. There is a high chance the EU’s regime will end up being the world’s de facto AI regulation, because companies in non-EU countries that want to do business in the powerful trading bloc will have to adjust their practices to comply with the law. 

Cons: Many elements of the bill, such as facial recognition bans and approaches to regulating generative AI, are highly controversial, and the EU will face intense lobbying from tech companies to water them down. It will take at least a couple of years for the bill to snake its way through the EU legislative system and enter into force.

Influence rating: 5/5

Technical industry standards

Technical standards from standard-setting bodies will play an increasingly crucial role in translating regulations into straightforward rules companies can follow, says Dawson. For example, once the EU’s AI Act passes, companies that meet certain technical standards will automatically be in compliance with the law. Many AI standards exist already, and more are on their way. The International Organization for Standardization (ISO) has already developed standards for how companies should conduct risk management and impact assessments and manage AI development.

Pros: These standards help companies translate complicated regulations into practical measures. And as countries start writing their own individual laws for AI, standards will help companies build products that work across multiple jurisdictions, Dawson says. 

Cons: Most standards are general and apply across different industries. So companies will have to do a fair bit of translation to make them usable in their specific sector. This could be a big burden for small businesses, says Dawson. One bone of contention is whether technical experts and engineers should be drafting rules around ethical risks. “A lot of people have concerns that policymakers … will simply punt a lot of the difficult questions about best practice to industry standards development,” says Dawson. 

Influence rating: 4/5

The United Nations

The United Nations, which counts 193 countries as its members, wants to be the sort of international organization that could support and facilitate global coordination on AI. In order to do that, the UN set up a new technology envoy in 2021. That year, the UN agency UNESCO and member countries also adopted a voluntary AI ethics framework, in which member countries pledge to, for example, introduce ethical impact assessments for AI, assess the environmental impact of AI, and ensure that AI promotes gender equality and is not used for mass surveillance. 

Pros: The UN is the only meaningful place on the international stage where countries in the Global South have been able to influence AI policy. While the West has committed to OECD principles, the UNESCO AI ethics framework has been hugely influential in developing countries, which are newer to AI ethics. Notably, China and Russia, which have largely been excluded from Western AI ethics debates, have also signed the principles.  

Cons: That raises the question of how sincere countries are in following the voluntary ethical guidelines, as many countries, including China and Russia, have used AI to surveil people. The UN also has a patchy track record when it comes to tech. The organization’s first attempt at global tech coordination was a fiasco: the diplomat chosen as technology envoy was suspended after just five days following a harassment scandal. And the UN’s attempts to come up with rules for lethal autonomous drones (also known as killer robots) haven’t made any progress for years. 

Influence rating: 2/5
