
AI is at an inflection point, Fei-Fei Li says

The renowned AI researcher shares her thoughts on the hard problems that lie ahead for the field. 

November 14, 2023
Fei-Fei Li standing in front of several red robot arms. Courtesy photo.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

“This moment in AI is an inflection moment,” Fei-Fei Li told me recently. Li is co-director of Stanford’s Human-Centered AI Institute and one of the most prominent computer science researchers of our time. She is best known for creating ImageNet, a popular image data set that was pivotal in allowing researchers to train modern AI systems. 

Two things have happened, Li explains. Generative AI has caused the public to wake up to AI technology, she says, because it’s behind concrete tools, such as ChatGPT, that people can try out for themselves. And as a result, businesses have realized that AI technology such as text generation can make them money, and they have started rolling these technologies out in more products for the real world. “Because of that, it impacts our world in a more profound way,” Li says. 

Li is one of the tech leaders we interviewed for the latest issue of MIT Technology Review, dedicated to the biggest questions and hardest problems facing the world. We asked big thinkers in their fields to weigh in on the underserved issues at the intersection of technology and society. Read what other tech luminaries and AI heavyweights, such as Bill Gates, Yoshua Bengio, Andrew Ng, Joelle Pineau, Emily Bender, and Meredith Broussard, had to say here.

In her newly published memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, Li recounts how she went from an immigrant living in poverty to the AI heavyweight she is today. It’s a touching look into the sacrifices immigrants have to make to achieve their dreams, and an insider’s telling of how artificial-intelligence research rose to prominence.  

When we spoke, Li told me she has her eyes set firmly on the future of AI and the hard problems that lie ahead for the field. 

Here are some highlights from our conversation. 

Why she disagrees with some of the AI “godfathers” about catastrophic AI risks: Other AI heavyweights, such as Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, have been jousting in public about the risks of AI systems and how to govern the technology safely. Hinton, in particular, has been vocal about his concerns that AI could pose an existential risk to humanity. Li is less convinced. “I absolutely respect that. I think, intellectually, we should talk about all this. But if you ask me as an AI leader… I feel there are other risks that are what I would call catastrophic risks to society that are more pressing and urgent,” she says. Li highlights practical, “rubber meets the road” problems such as misinformation, workforce disruption, bias, and privacy infringements.  

Hard problems: Another major AI risk Li is concerned about is the increasingly concentrated power and dominance of the tech industry at the expense of investment in science and technology research in the public sector. “AI is so expensive—hundreds of millions of dollars for one large model, making it impossible for academia. Where does that leave science for public good? Or diverse voices beyond the customer? America needs a moon-shot moment in AI and to significantly invest in public-sector research and compute capabilities, including a National AI Research Resource and labs similar to CERN. I firmly believe AI will help the human condition, but not without a coordinated effort to ensure America’s leadership in AI,” she told us.

The flaws of ImageNet: ImageNet, which Li created, has been criticized for being biased and containing unsafe or harmful photos, which in turn can lead to biased and harmful outcomes in AI systems. Li admits the database is not perfect. “It takes people to call out the imperfections of ImageNet and to call out fairness issues. This is why we need diverse voices,” she says. “It takes a village to make technology better.”

Copyright and data: Critics have said that the current practice of hoovering up data off the internet to create data sets leads to bias and copyright and privacy violations, among other problems. Li says the AI community needs to look into this. “Our collective understanding of the role data plays is so much more sophisticated now… compared to 2009, when we were the first to call out the importance of data,” she says, adding that the AI community needs to keep developing and learning from past mistakes.

Tips for anyone wanting to get into AI: “No matter where you come from, what background you have, if you’re passionate about this, you have a place [in AI],” she says. She adds that people from diverse backgrounds should try to ignore the way the field is portrayed by the media, which is to say very pale and male.

Her message to young technologists? “Math is clean, but technology’s social impact is messy. Recognize that messiness, because what we are creating has both positive and negative impact.” 

One last thing before you go … This week MIT Technology Review is hosting our flagship EmTech event, where we will be looking at where AI and other cutting-edge technologies go next. 

Join us November 14-15, 2023, on the MIT Campus and online. Subscribers to The Algorithm get a special 30% discount on tickets! Enter the code THAGTH23EM here.

Deeper Learning

Noise-canceling headphones could let you pick and choose the sounds you want to hear

Future versions of the technology could let users opt back in to certain sounds they’d like to hear, such as babies crying, birds tweeting, or alarms ringing. The technology that makes it possible, called semantic hearing, allows the wearer to filter out some sounds while boosting others. 

How it works: The system, which is still a prototype, connects off-the-shelf noise-canceling headphones to a smartphone app. The microphones embedded in these headphones, which are used to cancel out noise, are repurposed to also detect the sounds in the world around the wearer.
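To make the idea concrete, here is a minimal, hypothetical sketch of semantic filtering, assuming a frame-by-frame sound classifier. The classifier below is a stand-in, not the researchers' actual model: it simply passes through sound classes the wearer has opted back in to and silences everything else.

```python
# Minimal sketch of "semantic hearing": classify short audio frames and
# pass through only the sound classes the wearer opted back in to.
# classify_frame is a hypothetical stand-in for a trained neural network.
import numpy as np

ALLOWED = {"baby_crying", "alarm"}  # classes the wearer wants to hear

def classify_frame(frame: np.ndarray) -> str:
    # Stand-in classifier: treats loud frames as "alarm". A real system
    # would run a low-latency neural network over each frame instead.
    return "alarm" if frame.max() > 0.8 else "background"

def semantic_filter(audio: np.ndarray, frame_len: int = 1024) -> np.ndarray:
    out = np.zeros_like(audio)  # start from silence, as noise canceling does
    for start in range(0, len(audio), frame_len):
        frame = audio[start:start + frame_len]
        if classify_frame(frame) in ALLOWED:
            out[start:start + len(frame)] = frame  # boost opted-in sounds
    return out

# Example: quiet background with one loud alarm-like burst in the middle
mic = np.random.uniform(-0.1, 0.1, 8192)
mic[4096:5120] = 0.9
filtered = semantic_filter(mic)  # only the burst survives filtering
```

In practice, the hard part is presumably running this classification within milliseconds, so that the sounds passed through stay in sync with what the wearer sees.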

The big picture: Researchers have long tried to solve the “cocktail party problem”—that is, to get a computer to focus on a single voice in a crowded room, as humans are able to do. This new method represents a significant step forward and demonstrates the technology’s potential. For example, it could pave the way for smarter hearing aids and earphones. Read more from Rhiannon Williams here.

Bits and Bytes

Are we alone in the universe?
We’re getting closer than ever before to learning how common worlds like ours actually are. New tools, including machine learning and artificial intelligence, could help scientists look past their preconceived notions of what constitutes life. (MIT Technology Review)

Amazon and Musk have joined the AI-language-model bandwagon
ChatGPT wowed the world and kick-started Big Tech’s AI arms race around a year ago. New players are still joining the race with models they hope will help them stand out. Amazon is reportedly building its own ChatGPT rival, code-named Olympus, which would be one of the biggest models ever trained. Meanwhile, Elon Musk’s new AI venture, xAI, debuted a pilot of its AI model, called Grok, which has been trained on data from X and is designed to answer questions “with a bit of wit” and “a rebellious streak,” whatever that means. 

Generative AI has made Google search weird 
AI-generated text is poisoning the internet. Nonsense generated by language models is getting scraped into online search results, making Google search the least reliable it’s ever been for clear, accessible facts. (The Atlantic)

Could this AI pin kill the smartphone? 
Silicon Valley is buzzing about an AI lapel pin created by startup Humane. The $699 device magnetically clips onto your clothes and uses a camera and sensors to record its surroundings. The idea is that people will be able to control it using their voices, and the pin will project information into people’s palms. The system uses AI models to answer questions and summarize information. The voice-controlled element makes me pause the most—I like to doomscroll discreetly in silence. (The New York Times)
