Google has recently rolled out a new experimental search feature to hundreds of millions of users across Chrome, Firefox, and the Google app browser. Known as “AI Overviews,” the feature uses generative AI to summarize search results, sparing users the need to click through multiple links. For example, when asked how to keep bananas fresh, the AI offers useful tips such as storing them in a cool, dark place, away from ethylene-producing fruits like apples.
It's been quite a week for Google's new AI search results.
Here's a thread with the most wild answers: pic.twitter.com/QzYbTIOx4L
— Angry Tom (@AngryTomtweets) May 26, 2024
However, the tool has its drawbacks, particularly when handling unconventional questions. In some cases, the AI’s responses can be not only incorrect but dangerously misleading. Google is actively working to address these issues, but the process has been fraught with challenges, turning into a public relations nightmare and an ongoing struggle to maintain the integrity of its search results.
“AI Overviews” can deliver fun facts accurately, such as explaining that “Whack-A-Mole” is a game invented in Japan in 1975. Yet it also produces bizarre falsehoods: claims that astronauts have encountered cats on the moon, health advice recommending rocks for their supposed mineral content, and even the suggestion of glue as a pizza topping.
#Gravitas | Google's new artificial intelligence (AI) search feature is facing criticism for providing erratic, inaccurate answers. Its experimental "AI Overviews" tool has told some users searching for how to make cheese stick to pizza better that they could use "non-toxic… pic.twitter.com/65Il21hF2x
— WION (@WIONews) May 27, 2024
The root of these problems lies in the nature of generative AI. These systems generate responses based on patterns and popularity in the data they have been trained on rather than verifiable truths. For instance, the idea of eating rocks might be drawn from a popular satirical piece rather than from factual content. Furthermore, these AI models do not inherently align with human values since they are trained on vast and varied internet data, which can include biases and inaccuracies.
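The point that these systems echo what is popular in their training data, rather than what is true, can be illustrated with a deliberately simplified toy. This is a hypothetical sketch with made-up data, nothing like how Google's actual models are built: it just picks the most frequent continuation of a phrase, so a satirical line repeated often enough becomes the "likely" answer.

```python
from collections import Counter

# Toy corpus: the satirical claim appears more often than the sensible
# one, so frequency-based generation will reproduce the satire.
corpus = [
    "geologists recommend eating one small rock per day",  # satire, repeated
    "geologists recommend eating one small rock per day",
    "geologists recommend eating one small rock per day",
    "geologists recommend eating a balanced mineral-rich diet",
]

def next_word(prompt_words, corpus):
    """Return the most frequent word that follows the prompt in the corpus."""
    counts = Counter()
    n = len(prompt_words)
    for line in corpus:
        words = line.split()
        if words[:n] == prompt_words and len(words) > n:
            counts[words[n]] += 1
    return counts.most_common(1)[0][0] if counts else None

print(next_word(["geologists", "recommend", "eating"], corpus))
```

The toy predicts the satirical continuation simply because it is more common in the data, with no notion of whether the claim is true.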
This situation raises significant concerns about the future of search technology. Google’s rush to implement such AI features appears to be a response to competition from OpenAI and Microsoft, pushing the tech giant to innovate more aggressively than usual. The shift marks a departure from the traditionally cautious approach CEO Sundar Pichai described in 2023, when he emphasized responsible AI practices and the importance of not rushing products to market.
The stakes are high not only for Google but for the broader landscape of digital information. The move towards AI-driven summaries could potentially disrupt Google’s business model, which relies heavily on user engagement through link clicks. More critically, the reliance on AI for information retrieval could erode public trust in Google’s ability to deliver reliable answers and could have broader societal impacts by blurring the line between truth and fiction.
SEOs reacting to Google's 'AI overview'. pic.twitter.com/Qd8TDFHZTq
— Suganthan Mohanadasan (@Suganthanmn) May 20, 2024
Moreover, the ongoing development and deployment of AI could be creating a cycle where AI is trained on its own flawed outputs, amplifying biases and errors—a concept akin to “breathing its own exhaust.” This phenomenon could degrade the quality of information online over time.
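The "breathing its own exhaust" feedback loop can be sketched with a rough illustration. This is a made-up two-answer "model" with an assumed error rate, not anything resembling a real training pipeline: each generation is retrained purely on samples of the previous generation's output, plus a small fixed rate of fresh errors, so the share of wrong answers compounds over time.

```python
import random

random.seed(0)

# Toy "model": a probability distribution over one correct and one wrong answer.
weights = {"correct": 0.9, "wrong": 0.1}
HALLUCINATION_RATE = 0.02  # assumed rate of newly invented errors, for illustration

def generate(weights, n=10_000):
    """Sample n answers from the model; a small fraction are fresh errors."""
    answers = []
    for _ in range(n):
        if random.random() < HALLUCINATION_RATE:
            answers.append("wrong")
        else:
            answers.append(random.choices(list(weights), list(weights.values()))[0])
    return answers

def retrain(samples):
    """'Train' the next generation on the raw frequencies of the samples."""
    n = len(samples)
    return {k: samples.count(k) / n for k in ("correct", "wrong")}

for generation in range(5):
    weights = retrain(generate(weights))
    print(f"generation {generation + 1}: wrong = {weights['wrong']:.3f}")
```

Because each generation inherits the previous one's mistakes and adds its own, the wrong-answer share drifts upward every cycle, which is the degradation the paragraph above describes.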
As the AI landscape continues to evolve rapidly, with daily investments globally exceeding hundreds of millions of dollars, there’s a growing recognition of the need for regulatory frameworks to ensure that AI technologies are deployed responsibly. This mirrors the regulatory environments of other industries, such as pharmaceuticals and automotive, where safety and reliability are paramount. The challenge now is for tech companies to balance innovation with ethical considerations to prevent potential harm and misinformation. And given how much of a political machine Google has become for the Left, more and more people no longer view Google as a fair system that will always do no harm.
Major Points
- Google has launched “AI Overviews,” a new search feature on Chrome, Firefox, and its app browser, using generative AI to provide summaries of search results, aiming to save users from clicking multiple links.
- The feature struggles with non-standard queries, producing incorrect or dangerous responses, such as recommending eating rocks for their mineral content or suggesting astronauts have encountered cats on the moon.
- These inaccuracies stem from the AI’s reliance on patterns in popular data rather than factual correctness, without an inherent alignment to human values or truths.
- The issues with AI Overviews have led to a PR crisis for Google, forcing the company to address each error individually and reconsider its approach to releasing new technologies.
- The situation underscores the need for careful implementation and regulation of AI technologies, as they can significantly impact public trust and the quality of information, similar to regulatory standards in other industries.
Conner T – Reprinted with permission of Whatfinger News