
A conversation with Dragoș Tudorache, the politician behind the AI Act

Here’s why he believes the landmark law he helped to shepherd through will change the AI sector for the better.


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Dragoș Tudorache is feeling pretty damn good. We’re sitting in a conference room in a chateau overlooking a lake outside Brussels, sipping glasses of cava. The Romanian liberal member of the European Parliament has spent the day hosting a conference on AI, defense, and geopolitics attended by nearly 400 VIP guests. The day is almost over, and Tudorache has promised to squeeze in an interview with me during cocktail hour. 

A former interior minister, Tudorache is one of the most important players in European AI policy. He is one of the two lead negotiators of the AI Act in the European Parliament. The bill, the first sweeping AI law of its kind in the world, will enter into force this year. We first met two years ago, when Tudorache was appointed to his position as negotiator. 

But Tudorache’s interest in AI started much earlier, in 2015. He says reading Nick Bostrom’s book Superintelligence, which explores how an AI superintelligence could be created and what the implications might be, made him realize both the potential and the dangers of AI, and the need to regulate it. (Bostrom was recently embroiled in a scandal for expressing racist views in emails unearthed from the ’90s. Tudorache says he has not followed Bostrom’s work since the book’s publication, and he did not comment on the controversy.) 

When he was elected to the European Parliament in 2019, he says, he arrived determined to work on AI regulation if the opportunity presented itself. 

“When I heard [Ursula] von der Leyen [the European Commission president] say in her first speech in front of Parliament that there will be AI regulation, I said ‘Whoo-ha, this is my moment,’” he recalls. 

Since then, Tudorache has chaired a special committee on AI, and shepherded the AI Act through the European Parliament and into its final form following negotiations with other EU institutions. 

It’s been a wild ride, with intense negotiations, the rise of ChatGPT, lobbying from tech companies, and flip-flopping by some of Europe’s largest economies. But now, as the AI Act has passed into law, Tudorache’s job on it is done and dusted, and he says he has no regrets. Although the act has been criticized—both by civil society for not protecting human rights enough and by industry for being too restrictive—Tudorache says its final form was the sort of compromise he expected. Politics is the art of compromise, after all. 

“There’s going to be a lot of building the plane while flying, and there’s going to be a lot of learning while doing,” he says. “But if the true spirit of what we meant with the legislation is well understood by all concerned, I do think that the outcome can be a positive one.”  

It is still early days—the law comes fully into force two years from now. But Tudorache believes it will change the tech industry for the better and push companies to take responsible AI seriously, thanks to legally binding obligations for AI companies to be more transparent about how their models are built. (I wrote about the five things you need to know about the AI Act a couple of months ago here.)

“The fact that we now have a blueprint for how you put the right boundaries, while also leaving room for innovation, is something that will serve society,” says Tudorache. It will also serve businesses, he says, because it offers a predictable path forward on what you can and cannot do with AI. 

But the AI Act is just the beginning, and there is still plenty keeping Tudorache up at night. AI is ushering in big changes across every industry and society. It will change everything from health care to education, labor, defense, and even human creativity. Most countries have not grasped what AI will mean for them, he says, and the responsibility now lies with governments to ensure that citizens and society more broadly are ready for the AI age. 

“The crunch time … starts now,” he says. 

Join Dragoș Tudorache and me at Emtech Digital London on April 16-17! Tudorache will walk you through what companies need to take into account with the AI Act right now. See you next week!


Now read the rest of The Algorithm

Deeper Learning

A conversation with OpenAI’s first artist in residence

Alex Reben’s work is often absurd, sometimes surreal: a mash-up of giant ears imagined by DALL-E and sculpted by hand out of marble; critical burns generated by ChatGPT that thumb their nose at AI art. But its message is relevant to everyone. Reben is interested in the roles humans play in a world filled with machines, and how those roles are changing. He is also OpenAI’s first artist in residence. 
Meet the artist: Officially, the appointment started in January and lasts three months. But he’s been working with OpenAI for years already. Our senior editor for AI, Will Douglas Heaven, sat down with Reben to talk about the role AI can play in art, and the backlash against it from artists. Read more here.

Bits and Bytes

It’s easy to tamper with watermarks from AI-generated text

Watermarks for AI-generated text are easy to remove and can be stolen and copied, rendering them useless, researchers have found. They say these kinds of attacks discredit watermarks and can fool people into trusting text they shouldn’t. It’s an especially significant finding because many regulations around the world, including the AI Act, are betting heavily on the development of watermarks to trace AI-generated content. (MIT Technology Review)

How three filmmakers created Sora’s latest stunning videos

In the last month, a handful of filmmakers have taken OpenAI’s new generative AI model Sora for a test drive. The results are amazing. The short films are a big jump up even from the cherry-picked demo videos that OpenAI used to tease Sora just six weeks ago. Here’s how three of the filmmakers did it. (MIT Technology Review)

What’s next for generative video

Generative video will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix. (MIT Technology Review)

Google is considering charging for AI-powered search

In a major potential shake-up to Google’s business model, the tech giant is considering putting AI-powered search features behind a paywall. But considering how untrustworthy AI search results are, it’s unclear if people will want to pay for them. (Financial Times) 

The fight for AI talent heats up 

As layoffs sweep through the tech sector, AI jobs are still super hot. Tech giants are fighting each other for top talent, even offering seven-figure salaries, and poaching entire engineering teams with experience in generative AI. (Wall Street Journal)

Inside Big Tech's underground race to buy AI training data

AI models need to be trained on massive data sets, and big tech companies are quietly paying for data, chat logs, and personal photos hidden behind paywalls and login screens. (Reuters)

How tech giants cut corners to harvest data for AI

AI companies are running out of quality training data for their huge AI models. In order to harvest more data, tech companies such as OpenAI, Google, and Meta have cut corners, ignored corporate policies, and debated bending the law, the New York Times found. (New York Times)

