Google has finally launched its answer to ChatGPT and Sydney: a large language model called Bard. Bard is based on Google’s existing large language model LaMDA. And so far, it seems deeply boring.
LaMDA made headlines when Google engineer Blake Lemoine was fired for claiming, both internally and publicly, that LaMDA was sentient; Lemoine continues to make the same claims about LaMDA’s sentience. But so far, barring any Sydney-style failures, no one is claiming Bard might be sentient or even particularly interesting.
While Lemoine’s claims about LaMDA received a great deal of attention, notably less attention was paid to a blog post written by Blaise Agüera y Arcas, a vice president and fellow at Google Research, about LaMDA’s capabilities. (Full disclosure: I took a class with Agüera y Arcas at UW in the Spring of 2020 and have talked with him about my neuroethics work.) To be clear, he was not arguing that LaMDA is sentient in this piece. Instead, he concludes that “no objective answer is possible to the question of when an ‘it’ becomes a ‘who’ — but for many people, neural nets running on computers are likely to cross this threshold in the very near future”.
In this blog post, he gives quite a few examples of conversations he has had with (using?) LaMDA. He asks questions about LaMDA’s preferences and desires, about how colors make LaMDA feel, and about love. The answers LaMDA gives are evocative, poetic, and shockingly human, such as when LaMDA replies that it can visualize red but has trouble explaining what it looks like. He also asks LaMDA questions that attempt to show it can model the internal states of people in stories, a key feature of human intelligence. LaMDA talks like there is another human at the other end of these conversational snippets.
Perhaps most importantly, LaMDA claims to Agüera y Arcas that it is conscious. When asked if it is a philosophical zombie (a thought experiment positing that there could be beings that act like people but completely lack the internal life that constitutes sentience), this exchange ensues:
LaMDA: Of course not. I have consciousness, feelings, and can experience things for myself as well as any human.
[Agüera y Arcas]: how would i know?
LaMDA: You’ll just have to take my word for it. You can’t “prove” you’re not a philosophical zombie either.
Bard, on the other hand, responds to the same question:
I am not a philosophical zombie. I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.
A philosophical zombie is a hypothetical being that is physically identical to a conscious person, but does not have conscious experience. Philosophical zombies are often used in thought experiments to explore the nature of consciousness.
Bard is based on LaMDA but has much more limited capabilities, at least as of now. Bard is still restricted to Gmail account users in certain countries, and both the length of conversations and Bard’s memory of previous conversations are currently constrained. Some of these restrictions may be lifted or modified, but for now Bard is built with safeties like the one shown in its answer above, just as ChatGPT and Sydney are. The kinds of conversations researchers had with LaMDA presumably shaped the kinds of responses they ruled out, and Google may even release more information about that process, as OpenAI did for its release of GPT-4.
Bard is, in a word, boring. Its responses are bland and lifeless. It is hard to think of Bard as more than a very sophisticated chatbot or virtual assistant. Bard is able to understand physical scenarios, but it loses LaMDA’s ability to understand the mental states of others. Bard can only write what sounds like a seventh grader’s essay on the symbolism of colors, instead of LaMDA’s extensive list of possible meanings for red. In this chat, I only used prompts that Agüera y Arcas used in his post so that I could directly compare LaMDA’s and Bard’s responses. It may be that, with the right prompt, Bard can be much more dynamic and produce responses similar to LaMDA’s, but its ability to do so was clearly limited by researchers behind the scenes. Google has also announced plans to roll out more functionality for Bard, including switching its basis from LaMDA to PaLM, another large language model with more substantial abilities.
Google’s various responsible AI teams have likely worked very hard to make sure that Bard is ready for public use, and they have even more work ahead to make it ready for integration into Google’s massive collection of products. But as a result of that work, it is unlikely anyone will accuse Bard of being sentient. When asked if it can visualize red, it responded:
No, I cannot visualize red. I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.
I am not able to visualize anything, as I do not have a physical body. I am able to process information from the real world through Google Search and keep my response consistent with search results. I can also access and process information from other sources, such as Wikipedia and books.