
The AI myth Western lawmakers get wrong

Plus: How a bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing.

November 29, 2022

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

The trouble is, it's “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it’s only just released a draft law that tries to codify past social credit pilots and guide future implementation. 

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their behavior was judged. People are now able to opt out, and the local government has removed some controversial criteria. 

But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming criminals. They claim the aim is to prevent crime and to offer better, more targeted support.  

But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops by police, more home visits from authorities, and more stringent supervision from schools and social workers.

It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.   

Deeper Learning

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

Research company OpenAI has built an AI that binged on 70,000 hours of videos of people playing Minecraft in order to play the game better than any AI before. It’s a breakthrough for a powerful new technique, called imitation learning, that could be used to train machines to carry out a wide range of tasks by watching humans do them first. It also raises the potential that sites like YouTube could be a vast and untapped source of training data. 

Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, think that watching videos will eventually help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.
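If you're curious what "learning by watching humans" looks like under the hood, the simplest form of imitation learning is behavioral cloning: collect observation-action pairs from human demonstrations and train a policy, with ordinary supervised learning, to predict the action the human took. The sketch below is just that minimal version, with made-up toy data and hypothetical sizes; it is not OpenAI's actual Minecraft system, which works on raw video frames and first has to infer what keys and mouse movements the players used.

# Minimal behavioral-cloning sketch (toy data, hypothetical sizes), not OpenAI's code.
import torch
import torch.nn as nn

obs_dim, n_actions = 64, 8  # stand-ins for "what the agent sees" and its possible actions
policy = nn.Sequential(
    nn.Linear(obs_dim, 128),
    nn.ReLU(),
    nn.Linear(128, n_actions),  # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Pretend "demonstrations": observations paired with the action a human took.
demo_obs = torch.randn(1024, obs_dim)
demo_actions = torch.randint(0, n_actions, (1024,))

for step in range(200):
    loss = loss_fn(policy(demo_obs), demo_actions)  # learn to imitate the human's choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The training loop itself is standard supervised learning; what made OpenAI's result notable was the scale of the video data and the trick of labeling it with actions automatically.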

Bits and Bytes

Meta’s game-playing AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and spot when others are bluffing. Meta’s new AI, called Cicero, managed to trick human players in order to win. 

It’s a big step forward toward AI that can help with complex problems, such as planning routes around busy traffic and negotiating contracts. But I’m not going to lie—it’s also an unnerving thought that an AI can so successfully deceive humans. (MIT Technology Review)

We could run out of data to train AI language programs 

The trend of creating ever bigger AI models means we need even bigger data sets to train them. The trouble is, we might run out of suitable data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. This should prompt the AI community to come up with ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source text-to-image AI Stable Diffusion has been given a big facelift, and its outputs are looking a lot sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is breathtaking. Its first version only launched in August. We are likely going to see even more progress in generative AI well into next year. 
