
Noise-canceling headphones could let you pick and choose the sounds you want to hear

A neural network can recognize and filter out certain sounds, changing the way we choose to experience the world around us.



Future noise-canceling headphones could let users opt back in to certain sounds they’d like to hear, such as babies crying, birds tweeting, or alarms ringing.

The technology that makes it possible, called semantic hearing, could pave the way for smarter hearing aids and earphones, allowing the wearer to filter out some sounds while boosting others. 

The system, which is still a prototype, works by connecting off-the-shelf noise-canceling headphones to a smartphone app. The microphones embedded in these headphones, normally used to cancel out noise, are repurposed to pick up the sounds in the world around the wearer. Those sounds are fed to a neural network running on the smartphone, which boosts or suppresses particular sounds in real time according to the user's preferences. The system was developed by researchers at the University of Washington, who presented the research at the ACM Symposium on User Interface Software and Technology (UIST) last week.
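The paper's code isn't reproduced here, but the loop the researchers describe (capture, separate, reweight, play back) can be sketched. The Python below is a hypothetical illustration under stated assumptions: `model`, `mic.read_frame`, and `speaker.play_frame` are placeholder interfaces, not the team's actual API, and the class list is abbreviated.

```python
import numpy as np

# A minimal sketch of the real-time loop described above, assuming a
# model that separates each audio frame into one track per sound class.
# `mic`, `speaker`, and `model` are hypothetical interfaces, not the
# researchers' actual code.

SOUND_CLASSES = ["baby_cry", "bird_chirp", "alarm", "thunderstorm", "speech"]

def semantic_hearing_loop(model, mic, speaker, preferences, frame_ms=8):
    """preferences maps a class name to a gain: 1.0 keeps the sound,
    0.0 mutes it, and values above 1.0 boost it."""
    while True:
        frame = mic.read_frame(frame_ms)   # ambient audio from headset mics
        tracks = model(frame)              # per-class separated audio
        out = np.zeros_like(frame)
        for name, track in zip(SOUND_CLASSES, tracks):
            out += preferences.get(name, 0.0) * track  # keep, mute, or boost
        speaker.play_frame(out)            # low-latency playback to the ears
```

In this sketch, any sound class the user hasn't explicitly opted back in to defaults to a gain of zero, which matches the "opt back in" behavior the researchers describe.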

The team trained the network on thousands of audio samples drawn from online data sets and recorded in various noisy environments, teaching it to recognize 20 everyday sounds, such as a thunderstorm, a toilet flushing, or glass breaking.
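As a rough sketch of that kind of training setup, the snippet below sets up a small multi-label sound classifier in PyTorch. The architecture, spectrogram shapes, and random stand-in batch are illustrative assumptions, not the published model; only the 20-class target and the training data described above come from the article.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' model): a small network that
# predicts which of 20 trained sound classes are present in a clip.

NUM_CLASSES = 20  # e.g. thunderstorm, toilet flushing, glass breaking

class SoundTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, NUM_CLASSES)

    def forward(self, mel):  # mel: (batch, 1, n_mels, time)
        return self.head(self.conv(mel).flatten(1))  # per-class logits

model = SoundTagger()
loss_fn = nn.BCEWithLogitsLoss()  # multi-label: sounds can co-occur
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step on a batch of log-mel spectrograms.
mel_batch = torch.randn(8, 1, 64, 128)              # stand-in for real audio
labels = torch.randint(0, 2, (8, NUM_CLASSES)).float()
opt.zero_grad()
loss = loss_fn(model(mel_batch), labels)
loss.backward()
opt.step()
```

A multi-label loss is used here rather than a single-class one because, in a real environment, several of the 20 sounds can be present at once.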

The system was tested on nine participants, who wandered around offices, parks, and streets. The researchers found that it muffled and boosted sounds well, even in situations it hadn't been trained for, though it struggled slightly to separate human speech from background music, especially rap music.

Mimicking human ability

Researchers have long tried to solve the “cocktail party problem”—that is, to get a computer to focus on a single voice in a crowded room, as humans are able to do. This new method represents a significant step forward and demonstrates the technology’s potential, says Marc Delcroix, a senior research scientist at NTT Communication Science Laboratories, Kyoto, who studies speech enhancement and recognition and was not involved in the project. 

“This kind of achievement is very helpful for the field,” he says. “Similar ideas have been around, especially in the field of speech separation, but they are the first to propose a complete real-time binaural target sound extraction system.”

“Noise-canceling headsets today have this capability where you can still play music even when the noise canceling is turned on,” says Shyam Gollakota, an assistant professor at the University of Washington, who worked on the project. “Instead of playing music, we are playing back the actual sounds of interest from the environment, which we extracted from our machine-learning algorithms.”

Gollakota is excited by the technology’s potential for helping people with hearing loss, as hearing aids can be of limited use in noisy environments. “It’s a unique opportunity to create the future of intelligent hearables through enhanced hearing,” he says.

The ability to be more selective about what we can and can’t hear could also benefit people who require focused listening for their job, such as health-care, military, and engineering professionals, or for factory or construction workers who want to protect their hearing while still being able to communicate.

Filtering out the world

This type of system could, for the first time, give us a degree of control over the sounds that surround us, for better or worse, says Mack Hagood, an associate professor of media and communication at Miami University in Ohio and author of Hush: Media and Sonic Self-Control, who was not involved in the project.

“This is the dream—I’ve seen people fantasizing about this for a long time,” he says. “We’re basically getting to tick a box whether we want to hear those sounds or not, and there could be times where this narrowing of experience is really beneficial—something we really should do that might actually help promote better communication.”

But whenever we opt for control and choice, we’re pushing aside serendipity and happy accidents, he says. “We’re deciding in advance what we do and don’t want to hear,” he adds. “And that doesn’t give us the opportunity to know whether we really would have enjoyed hearing something.”
