AI Chatbots May Not Always Give Accurate Info

Prabhakar Raghavan, Google’s head of search, has cautioned against relying too heavily on AI chatbots for accurate information. In an interview with the German newspaper Welt am Sonntag, he said that these AI systems can sometimes produce false but convincing answers, a phenomenon he referred to as “hallucination.”

He went on to say that in such cases the machine delivers a “completely fictitious answer that is nonetheless convincing.” Despite this warning, Google has recently introduced its own AI chatbot, Bard, intended to compete with ChatGPT. However, an advertisement for Bard displayed an incorrect answer to a question about the James Webb Space Telescope. The mistake caused a drop in the shares of Alphabet, Google’s parent company, amid concerns about ChatGPT’s potential impact on Google’s search dominance.

Raghavan added that Google is exploring ways to incorporate these AI capabilities into its search functions, especially for questions that have multiple answers. His remarks come in the aftermath of criticism directed at Google and its CEO, Sundar Pichai, over the hasty and flawed introduction of Bard.

Maarten Bosma, a former researcher at Google Brain, Alphabet’s AI division, said on Twitter that the presentation showed the company wasn’t taking AI seriously enough.
