Social media has been flooded with bizarre and dangerous advice apparently generated by Google's new AI Overviews feature. The company continues to defend the search tool as "high quality."
Google has updated its search engine with an artificial intelligence (AI) tool, but the new feature has reportedly told users to eat rocks, add glue to their pizzas, and clean their washing machines with chlorine gas, according to various social media and news reports.
In a particularly egregious example, the AI appeared to suggest jumping off the Golden Gate Bridge when a user searched "I'm feeling depressed."
The experimental "AI Overviews" tool scours the web to summarize search results using the Gemini AI model. The feature has been rolled out to some users in the U.S. ahead of a worldwide release planned for later this year, Google announced May 14 at its I/O developer conference.
However, the tool has already caused widespread dismay across social media, with users claiming that on some occasions AI Overviews generated summaries using articles from the satirical website The Onion and comedic Reddit posts as its sources.
"You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness," AI Overviews said in response to one query about pizza, according to a screenshot posted on X (formerly Twitter). Tracing the answer back, it appears to be based on a decade-old joke comment made on Reddit.
Other erroneous claims include that former President Barack Obama is a Muslim, that Founding Father John Adams graduated from the University of Wisconsin 21 times, that a dog has played in the NBA, NHL, and NFL, and that users should eat a rock a day to aid their digestion.
The Extent of the Issue
Live Science could not independently verify the posts. In response to questions about how widespread the erroneous results were, Google representatives said in a statement that the examples seen were "generally very uncommon queries, and aren't representative of most people's experiences."
"The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web," the statement said. "We conducted extensive testing before launching this new experience to ensure AI Overviews meet our high bar for quality. Where there have been violations of our policies, we've taken action — and we're also using these isolated examples as we continue to refine our systems overall."
AI Hallucinations: A Recurring Problem
This is far from the first time that generative AI models have been spotted making things up — a phenomenon known as "hallucinations." In one notable example, ChatGPT fabricated a sexual harassment scandal and named a real law professor as the perpetrator, citing fictitious newspaper reports as evidence.
Moving Forward with Caution
The incidents underscore the importance of critical thinking and fact-checking in the age of AI. While AI technologies hold great potential, they also come with significant risks, particularly when they propagate false or dangerous information.
For now, Google is working to address these issues and refine its AI systems to prevent future errors. However, users should remain vigilant and verify information from multiple credible sources before taking any action based on AI-generated advice.
__________________________________________________________________________
Vertical Bar Media
For more information on how to safely implement AI tools in your business or personal projects, visit Vertical Bar Media's Digital Marketing Services.