
The ChatGPT-fueled battle for search is bigger than Microsoft or Google

A frenzy of activity from tech giants and startups alike is reshaping what people want from search—for better or worse.

""
Stephanie Arnett/MITTR | Degroote Stock/Envato

It’s a good time to be a search startup. When I spoke to Richard Socher, the CEO of You.com, last week, he was buzzing: “Man, what an exciting day—looks like another record for us,” he exclaimed. “Never had this many users. It’s been a whirlwind.” You wouldn’t know that two of the biggest firms in the world had just revealed rival versions of his company’s product.

In back-to-back announcements last week, Microsoft and Google staked out their respective claims to the future of search, showing off chatbots that can respond to queries with fluid sentences rather than lists of links. Microsoft has upgraded its search engine Bing with a version of ChatGPT, the popular chatbot released by San Francisco–based OpenAI last year. Google is working with a chatbot of its own, called Bard.

These announcements gave us a glimpse of what’s next for search. But to get the full picture we need to look beyond Microsoft and Google: those giants will continue to dominate but, for anyone wanting an alternative, search is about to get crowded.

That’s because a new wave of startups has been playing with many of the same chatbot-enhanced search tools for months. You.com launched a search chatbot back in December and has been rolling out updates since. A raft of other companies, such as Perplexity, Andi, and Metaphor, are also combining chatbot apps with upgrades like image search, social features that let you save or continue search threads started by others, and the ability to search for information just seconds old.

ChatGPT's success has created a frenzy of activity as tech giants and startups alike try to figure out how to give people what they want—whether they knew they wanted it or not.

Old guard, new ideas

Google has dominated the search market for years. “It’s been pretty steady for a long time,” says Chirag Shah, who studies search technologies at the University of Washington. “Despite lots of innovations, the needle hasn’t shifted much.”

That changed with the launch of ChatGPT in November. Suddenly, the idea of searching for things by typing in a string of disconnected words felt old-fashioned. Why not just ask for what you want?

People are hooked on this idea of combining chatbots and search, says Edo Liberty, who used to lead Amazon’s AI lab and is now CEO of Pinecone, a company that makes databases for search engines: “It’s the right kind of pairing, it’s peanut butter and jelly.”

Google has been exploring the idea of using large language models (the tech behind chatbots like ChatGPT and Bard) for some time. But when ChatGPT became a mainstream hit, Google and Microsoft made their moves.

So did others. There are now several small companies competing with the big players, says Liberty. “Just five years ago, it would be a fool’s errand,” he says. “Who in their right mind would try to storm that castle?”

Today, off-the-shelf software has made it easier than ever to build a search engine and plug it into a large language model. “You can now bite chunks off technologies that were built by thousands of engineers over a decade with just a handful of engineers in a few months,” says Liberty.

That’s been Socher’s experience. Socher left his role as chief AI scientist at Salesforce to cofound You.com in 2020. The site acts as a one-stop shop for web-search power users looking for a Google alternative. It aims to give people answers to different types of queries in a range of formats, from movie recommendations to code snippets.

Last week it introduced multimodal search—where its chatbot can choose to respond to queries using images or embedded widgets from affiliated apps rather than text—and a feature that lets people share their exchanges with the chatbot, so that others can pick up an existing thread and dive deeper into a query.

This week, You.com launched an upgrade that fields questions about live sports events, such as whether the Eagles could still win the Super Bowl with eight minutes left to play.  

Perplexity—a company set up by former researchers from OpenAI, Meta, and Quora, a website where people ask and answer each other’s questions—is taking search in a different direction. The startup, which has combined a version of OpenAI’s large language model GPT-3 with Bing, launched its search chatbot in December and says that around a million people have tried it out so far. The idea is to take that interest and build a social community around it.

The company wants to reinvent community-based repositories of information, such as Quora or Wikipedia, using a chatbot to generate the entries instead of humans. When people ask Perplexity’s chatbot questions, the Q&A sessions are saved and can be browsed by others. Users can also up- or downvote responses generated by the chatbot, and add their own queries to an ongoing thread. It’s like Reddit, but where humans ask the questions and an AI answers.

Last week, the day after Google’s (yet-to-be-released) chatbot Bard was spotted giving an incorrect answer in a rushed-out promo clip (a blooper that may have cost the company billions), Perplexity announced a new plug-in for Google’s web browser, Chrome, with a clip of its own chatbot giving the right answer to the same question.  

Angela Hoover, CEO and cofounder of Miami-based search firm Andi, set up her company a year ago after becoming frustrated at having to sift through ads and spam to find relevant links in Google. Like many people who have played around with chatbots such as ChatGPT, Hoover has a vision for search inspired by science-fiction know-it-alls like Jarvis in Iron Man or Samantha in Her.

Of course, we don’t have anything like that yet. “We don’t think Andi knows everything,” says Hoover. “Andi’s just finding information that people have put on the internet and bringing it to you in a nice, packaged-up form.”

Andi’s spin on search involves using large language models to pick the best results to summarize. Hoover says the company has trained its models on everything from Pulitzer-winning articles to SEO spam to make the engine better at favoring certain results and avoiding others.

Ultimately, the battle for search won’t just be confined to the web—tools will also be needed to search through more personal sources like emails and text messages. “Compared to the rest of the data in the world, the web is tiny,” says Liberty. 

According to Liberty, there are tons of companies using chatbots for search that are not competing with Microsoft and Google. His company, Pinecone, provides software that makes it easy to combine large language models with small, custom-built search engines. Customers have used Pinecone to build bespoke search tools for user manuals, medical databases, and transcripts of favorite podcasts. “I don’t know why, but we had somebody use Pinecone to build a Q&A bot for the Bible,” he says.
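The pattern Liberty is describing—embed a small, custom corpus, store the vectors, retrieve the closest matches for a question, and hand them to a language model—is roughly what all of these bespoke tools share. Here is a minimal, hypothetical sketch of that pattern in plain Python; the embed and llm_complete functions are placeholders for whichever embedding model and language model a builder actually uses, and a vector database such as Pinecone would stand in for the in-memory list.

```python
# Sketch only: "small custom search engine + large language model".
# embed() and llm_complete() are hypothetical stand-ins, not any product's API.

import numpy as np

documents = [
    "To reset the router, hold the recessed button for ten seconds.",
    "The warranty covers manufacturing defects for two years.",
    "Firmware updates are installed from the admin page at 192.168.1.1.",
]

def embed(text: str) -> np.ndarray:
    # Placeholder: returns a deterministic dummy vector for the text.
    # A real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call a large language model here.
    return "(model response would go here)"

# 1. Index the corpus once: one vector per document.
index = [(doc, embed(doc)) for doc in documents]

def answer(question: str, top_k: int = 2) -> str:
    # 2. Retrieve the documents whose vectors are closest to the question's.
    q = embed(question)
    scored = sorted(
        index,
        key=lambda pair: np.dot(q, pair[1]) / (np.linalg.norm(q) * np.linalg.norm(pair[1])),
        reverse=True,
    )
    context = "\n".join(doc for doc, _ in scored[:top_k])
    # 3. Ask the model to answer using only the retrieved passages.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)
```

Swap the corpus for a user manual, a medical database, or podcast transcripts and the same handful of lines becomes one of the niche search tools Liberty describes.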

Filling in the blanks 

But many people think that using chatbots for search is a terrible idea, full stop. The large language models that drive them are permeated with bias, prejudice, and misinformation. Hoover accepts this. “Large language models on their own are absolutely not enough,” she says. “They are fill-in-the-blank machines—they just make stuff up.”

Companies building chatbots for search try to get around this problem by plugging large language models into existing search engines and getting them to summarize relevant results rather than inventing sentences from scratch. Most also make their chatbots cite the web pages or documents they are summarizing, with links that users can follow if they want to verify answers or dive deeper.
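As a rough illustration of that summarize-and-cite approach, here is a hypothetical sketch in Python. The web_search and llm_complete functions are stand-ins for a real search API and a real language model; no specific company’s pipeline is being reproduced.

```python
# Sketch only: summarize retrieved results and attach citations.
# web_search() and llm_complete() are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    url: str
    snippet: str

def web_search(query: str) -> list[Result]:
    # Stand-in for a call to a search engine's API or a custom index.
    return [
        Result("Example page", "https://example.com/a", "An example snippet about the query."),
        Result("Another page", "https://example.com/b", "A second snippet with more detail."),
    ]

def llm_complete(prompt: str) -> str:
    # Stand-in for a call to a large language model.
    return "A short answer grounded in the snippets, citing sources as [1] and [2]."

def answer_with_citations(query: str) -> str:
    results = web_search(query)
    # Number each result so the model can refer to it, then ask for a grounded summary.
    numbered = "\n".join(f"[{i+1}] {r.title}: {r.snippet}" for i, r in enumerate(results))
    prompt = (
        "Using only the numbered sources below, answer the question and cite "
        f"sources as [n].\n\n{numbered}\n\nQuestion: {query}"
    )
    summary = llm_complete(prompt)
    # Append the cited links so readers can verify answers or dive deeper.
    links = "\n".join(f"[{i+1}] {r.url}" for i, r in enumerate(results))
    return f"{summary}\n\nSources:\n{links}"

if __name__ == "__main__":
    print(answer_with_citations("Why do search chatbots cite their sources?"))
```

The grounding only goes as far as the retrieved snippets: if the search results are wrong or the model strays from them, the confident-sounding summary will be wrong too, which is exactly the failure mode described below.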

But these tactics are far from foolproof. In the days since Microsoft opened up the new Bing to early users, social media has been filled with screenshots showing the chatbot going off the rails as people find ways to elicit nonsensical or offensive responses. According to Dmitri Brereton, a software engineer working on AI and search, Microsoft’s slick Bing Chat demo was also riddled with errors.

Hoover suspects that Microsoft’s and Google’s chatbots may produce incorrect responses because they stitch together snippets from search results, which may themselves be inaccurate. “It’s a bad approach,” she says. “It is easy to demo because it looks impressive, but it produces dodgy answers.” (Microsoft and Google did not respond to requests for comment.)

Hoover says that Andi avoids simply repeating text from search results. “It doesn’t make things up like other chatbots,” she says. People can decide for themselves whether or not that’s true. After collecting feedback from its users for the past year, the company’s chatbot will now sometimes admit when it’s not confident about an answer. “It’ll say, ‘I’m not sure, but according to Wikipedia …,’” says Hoover.

Either way, this new era of search probably won’t ditch lists of links entirely. “When I think about search five years from now, we’ll still have the ability to look through results,” says Hoover. “I think that’s an important part of the web.”

But as chatbots get more convincing, will we be less inclined to check up on their answers? “What’s noteworthy isn’t that large language models generate false information, but how good they are at turning off people’s critical reasoning abilities,” says Mike Tung, CEO of Diffbot, a company that builds software to pull data from the web.  

The University of Washington’s Shah shares that concern. In Microsoft’s demo for Bing Chat, the company hammered home the message that using chatbots for search can save time. But Shah points out that a little-known project Microsoft has been working on for years, called Search Coach, is designed to teach people to stop and think.

Billed as “a search engine with training wheels,” Search Coach helps people, especially students and educators, learn how to write effective search queries and identify reliable resources. Instead of saving time, Search Coach encourages people to slow down. “Compare that to ChatGPT,” says Shah.

Companies like Andi, Perplexity, and You.com are happy to admit they’re still figuring out what search could be. The truth is that it can be many things.

“You don't want to fight against convenience, that's a losing battle in consumer tech,” says Socher. “But there’s some pretty fundamental questions about the entire state of the internet at play here.”
