How AI-generated text is poisoning the internet
Plus: A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
This has been a wild year for AI. If you’ve spent much time online, you’ve probably bumped into images generated by AI systems like DALL-E 2 or Stable Diffusion, or jokes, essays, or other text written by ChatGPT, the latest incarnation of OpenAI’s large language model GPT-3.
Sometimes it’s obvious when a picture or a piece of text has been created by an AI. But increasingly, the output these models generate can easily fool us into thinking it was made by a human. And large language models in particular are confident bullshitters: they create text that sounds correct but in fact may be full of falsehoods.
While that doesn’t matter if it’s just a bit of fun, it can have serious consequences if AI models are used to offer unfiltered health advice or provide other forms of important information. AI systems could also make it stupidly easy to produce reams of misinformation, abuse, and spam, distorting the information we consume and even our sense of reality. It could be particularly worrying around elections, for example.
The proliferation of these easily accessible large language models raises an important question: How will we know whether what we read online is written by a human or a machine? I’ve just published a story looking into the tools we currently have to spot AI-generated text. Spoiler alert: Today’s detection tool kit is woefully inadequate against ChatGPT.
But there is a more serious long-term implication. We may be witnessing, in real time, the birth of a snowball of bullshit.
Large language models are trained on data sets that are built by scraping the internet for text, including all the toxic, silly, false, malicious things humans have written online. The finished AI models regurgitate these falsehoods as fact, and their output is spread everywhere online. Tech companies scrape the internet again, scooping up AI-written text that they use to train bigger, more convincing models, which humans can use to generate even more nonsense before it is scraped again and again, ad nauseam.
This problem—AI feeding on itself and producing increasingly polluted output—extends to images. “The internet is now forever contaminated with images made by AI,” Mike Cook, an AI researcher at King’s College London, told my colleague Will Douglas Heaven in his new piece on the future of generative AI models. “The images that we made in 2022 will be a part of any model that is made from now on.”
In the future, it’s going to get trickier and trickier to find good-quality, guaranteed AI-free training data, says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning. It’s not going to be good enough to just blindly hoover text up from the internet anymore, if we want to keep future AI models from having biases and falsehoods embedded to the nth degree.
“It’s really important to consider whether we need to be training on the entirety of the internet or whether there’s ways we can just filter the things that are high quality and are going to give us the kind of language model we want,” says Ippolito.
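To make that idea a little more concrete, here is a minimal Python sketch of what screening scraped text could look like. The thresholds and heuristics are assumptions invented for illustration—this is not a pipeline described by Ippolito or Google, and real training setups rely on far more sophisticated filters (deduplication, language identification, quality and toxicity classifiers).

```python
from collections import Counter
import re

# Illustrative thresholds only -- assumptions for this sketch, not a method
# described in the piece.
MIN_WORDS = 50            # drop tiny fragments and boilerplate snippets
MAX_TOP_WORD_RATIO = 0.2  # drop pages dominated by one repeated token

def looks_high_quality(doc: str) -> bool:
    """Rough heuristic gate for a single scraped web document."""
    words = re.findall(r"[a-z']+", doc.lower())
    if len(words) < MIN_WORDS:
        return False
    # Extreme repetition is a common sign of spam or machine-generated filler.
    top_count = Counter(words).most_common(1)[0][1]
    return top_count / len(words) <= MAX_TOP_WORD_RATIO

scraped_pages = ["... raw text of page 1 ...", "... raw text of page 2 ..."]
training_docs = [page for page in scraped_pages if looks_high_quality(page)]
```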
Building tools for detecting AI-generated text will become crucial when people inevitably try to submit AI-written scientific papers or academic articles, or use AI to create fake news or misinformation.
Technical tools can help, but humans also need to get savvier.
Ippolito says there are a few telltale signs of AI-generated text. Humans are messy writers. Our text is full of typos and slang, and looking out for these sorts of mistakes and subtle nuances is a good way to identify text written by a human. In contrast, large language models work by predicting the next word in a sentence, and they are more likely to use common words like “the,” “it,” or “is” instead of wonky, rare words. And while they almost never misspell words, they do get things wrong. Ippolito says people should look out for subtle inconsistencies or factual errors in texts that are presented as fact, for example.
The good news: her research shows that, with practice, we can train ourselves to better spot AI-generated text. Maybe there is hope for us all yet.
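For readers who want to poke at this themselves, here is a tiny, hypothetical Python sketch of the “common words” signal Ippolito describes. The word list and the way it is used are assumptions for illustration only; a crude ratio like this is one weak hint among many, not a reliable detector.

```python
import re

# A handful of high-frequency English function words. The list and the
# threshold you might apply to it are illustrative assumptions, not a
# validated detection method.
COMMON_WORDS = {"the", "it", "is", "a", "an", "and", "of", "to", "in", "that"}

def common_word_ratio(text: str) -> float:
    """Fraction of tokens that are very common function words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in COMMON_WORDS for w in words) / len(words)

# A suspiciously smooth, typo-free passage that leans heavily on "safe,"
# high-probability words is one weak signal of machine-written text;
# human writing tends to be messier and more idiosyncratic.
sample = "The weather today is nice and it is a good day to go to the park."
print(f"common-word ratio: {common_word_ratio(sample):.2f}")
```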
Deeper Learning
A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
This story made my skin crawl. Earlier this year my colleague Eileen Guo got hold of 15 screenshots of private photos taken by a robot vacuum, including images of someone sitting on the toilet, that had been posted to closed social media groups.
Who is watching? iRobot, the developer of the Roomba robot vacuum, says that the images did not come from the homes of customers but from “paid collectors and employees” who signed written agreements acknowledging that they were sending data streams, including video, back to the company for training purposes. But it’s not clear whether these people knew that humans, in particular, would be viewing these images in order to train the AI.
Why this matters: The story illustrates the growing practice of sharing potentially sensitive data to train algorithms, as well as the surprising, globe-spanning journey that a single image can take—in this case, from homes in North America, Europe, and Asia to the servers of Massachusetts-based iRobot, from there to San Francisco–based Scale AI, and finally to Scale’s contracted data workers around the world. Together, the images reveal a whole data supply chain—and new points where personal information could leak out—that few consumers are even aware of. Read the story here.
Bits and Bytes
OpenAI founder Sam Altman tells us what he learned from DALL-E 2
Altman tells Will Douglas Heaven why he thinks DALL-E 2 was such a big hit, what lessons he learned from its success, and what models like it mean for society. (MIT Technology Review)
Artists can now opt out of the next version of Stable Diffusion
The decision follows a heated public debate between artists and tech companies over how text-to-image AI models should be trained. Since the launch of Stable Diffusion, artists have been up in arms, arguing that the model rips them off by including many of their copyrighted works without any payment or attribution. (MIT Technology Review)
China has banned lots of types of deepfakes
The Cyberspace Administration of China has banned deepfakes that are created without their subject’s permission, as well as those that go against socialist values or disseminate “illegal and harmful information.” (The Register)
What it’s like to be a chatbot’s human backup
As a student, writer Laura Preston had an unusual job: stepping in when a real estate AI chatbot called Brenda went off-script. The goal was that customers would not notice. The story shows just how dumb the AI of today can be in real-life situations, and how much human work goes into maintaining the illusion of intelligent machines. (The Guardian)