Will Google’s Bard Be a Destination Chatbot?
Why it could succeed or fail—and what that means for the future of AI chatbots
Apparently, no one knows how to write a call to action at Google.
On Wednesday, the company launched its newest language model, Gemini, which purportedly beats GPT-4 on some benchmarks. Awesome! Except there was no easy link to try it. Most companies put a giant TRY NOW button at the top of their product announcements, but not Google.
Instead, I had to swim in the corporate blog post version of the Great Pacific Garbage Patch trying to find a goddamn way to actually use the thing. It was easier to cancel my gym membership than to get access to its chatbot.
After consulting a dowser (err, my friends on X), I found a sentence in the post explaining that while Gemini is available in Bard—Google’s ChatGPT competitor—immediately, its most performant version—the one that beats GPT-4—won’t be out until next year. Daddy Googs giveth, and Daddy Googs taketh away.
Without the option to try it, I read all the content and watched the videos so you don’t have to (only because that is literally my job)—and lo and behold, hallelujah, there’s some interesting stuff.
Gemini can do scientific research. It was able to scan through 200,000 papers published in the last few years, figure out which ones contained relevant new science, extract the findings, and construct a review paper with what it learned. It’s also very good at coding, and it has a neat UI that builds bespoke software experiences as you chat with it. It can do these things better than previous versions because it has an enhanced ability to plan tasks in advance.
We’ll see how it goes when the most capable version is released, but Gemini raises an interesting question:
Does Bard have a future as a destination chatbot? It almost certainly has a future in the context of other Google products. In other words, as long as I’m using Gmail or typing searches in Chrome, I’ll use Bard because it will be built in.
But will I, or anyone else, go to Bard by choice? Right now, I do that with ChatGPT and Claude. Do I have room for one more? Ten? How many chatbots will become destinations? Let’s unpack.
Why Bard could succeed
There are two obvious reasons that Bard could become a destination. First, it gets to leverage Google’s massive distribution by being built into Chrome, Gmail, Docs, and more. As a result, it will be the first AI experience for many millions of people—and it will reap the rewards of the habit it creates.
Second, it has access to all of the data you and I and everyone else has saved in Google. I’ve written a lot about the idea that access to knowledge is the most important bottleneck for the quality of LLM results, and Google has a massive advantage. The fact that Bard can already reference all of my emails and documents is going to have a big impact on how good it is at saying smart things.
But there’s a third, less obvious reason that Bard might be able to become a destination. Talking to a chatbot is a lot like talking to a friend. And—as shown by the 197 unread text messages on my phone right now (sorry if that’s you)—I chat with a lot of different friends at once. Our brains are wired to remember many different people, and to know when and why we might want to talk to them. I have a small circle of people who I text all day, every day. A slightly larger circle that I text a few times a week or month to make plans or share memes. And I have a much larger circle of people I text in specific circumstances. For example, there’s a friend I text when a new steak video drops on YouTube—but we don’t talk about anything else. (Male friendships are weird.)
The Dunbar number is, famously, the number of relationships a person can maintain at once—usually about 150, of varying strengths.
If chatbots are human enough to get filed in the same part of our brain that deals with relationships, they might have a similar dynamic with us. This might leave room for Bard, even in a world where ChatGPT continues to charge ahead.
But there are forces going against it.
Why Bard could fail
Google can’t ship. Most of the research that OpenAI is built on happened at Google—and yet Google fumbled the bag. As I’ve written before, aside from knowledge orchestration, the other thing holding back result quality from LLMs is risk tolerance. Organizations that are willing to take more risks will be able to build better AI products.
Google is not in a position to take a lot of risk because it is a giant company, and every move it makes affects hundreds of millions of people. Big companies always mess up big technology transitions—think about Blackberry in the iPhone era—so it was smart for Microsoft to partner with OpenAI. Microsoft owns all of the IP that OpenAI produces, so it gets the benefit of the progress the startup can make; at the same time, OpenAI is separate enough so that it doesn’t get choked by Microsoft’s sclerotic bureaucracy.
Another force going against Bard is that there are likely to be winner-take-all dynamics in chat.
To completely contradict a point I made above, chatbots are not human. Therefore, we may treat them like other software products: we will only remember the ones that we use multiple times a day. In a world where this is true, usage will probably trend toward most people using the best chatbot available—and right now that’s ChatGPT.
This dynamic creates a compounding position for ChatGPT: the more users it has, the more data it has that it can use to make the product better, which will attract more users. There are also secondary positive feedback loops in the form of historical chats. One example: ChatGPT is likely to integrate a memory that can recall things you’ve said to it in the past—improving result quality and preventing people from switching.
In this way, the chatbot space might look a lot like search. There will be one dominant player (like Google). There will also be lots of little chatbots integrated into existing experiences that people already use (like search boxes on individual websites). But the opportunity for the latter will be tiny compared to the former.
So which one is right?
The Dunbar number for chatbots is high—for destination chatbots, it’s 1
My gut is that the Dunbar number for destination chatbots is one. In other words, we’re likely to find ourselves in a world with a single dominant destination chatbot. I think that’s going to be ChatGPT. It’s currently the best one out there and has a great data flywheel. OpenAI has the highest density of talent, and with Sam Altman back, it’s newly focused and invigorated.
But I think there will be demand from users to interact with many chatbots within the single destination they visit. I think ChatGPT will iterate on custom GPTs to the point where you’ll have different ones that you interact with on a given day—or in a given situation. Our brains are wired for this—remember, we can maintain up to 150 social relationships at a time—so the possibilities are pretty wild.
You can already see the beginnings of this with chat platforms like Poe, which lets you interact with GPT-4, Claude, and user-customized versions of each within the same interface. Character.AI has landed on a similar concept, letting users interact with chatbots that have different personalities and memories.
ChatGPT might at some point resemble something like Slack, but populated with one human and many different bots. Instead of chatting with one bot at a time like you do now, you may have a single channel where you can chat with many different bots at once—all offering their perspective and expertise when appropriate.
The Dunbar number for chatbots is high; the Dunbar number for destination chatbots is one. Game on.