Challenges and Risks of Google’s AI-Driven Search Summaries

On a plane heading to Tokyo, I wanted to find out how to get from Narita Airport into the city by train. The airplane Wi-Fi located me in the US, or at least made Google treat me as someone who should see the AI Overviews it had introduced earlier in the year. When I searched for “airliner train” in Tokyo, the top result was Google’s AI Overview, a summary of the web results generated by its Gemini system.

The overview listed options such as the “Skyliner,” the “Narita Express,” and the “Jodan Skyflyer Ultra Express.” I knew the Skyliner and the Narita Express, but the “Jodan Skyflyer Ultra Express” puzzled me. A follow-up search found it mentioned in exactly one place: a blog post by a Japan enthusiast named Todd Fong, who described a fictional flying train as part of his “Illusions of Japan” series, a project that deliberately mixes truth and fiction.

Google’s AI had folded a fictional train into its overview alongside two real ones, showing that the system cannot reliably separate fiction from fact. This is all the more concerning because AI Overviews apparently assign high trust scores even to lesser-known websites, so a single imaginative blog post can end up presented with the same authority as a railway operator’s official page.

Google has been integrating more and more content directly into its search results pages to keep users on its own site and reduce the need to click through to external websites, an approach that has long drawn criticism from publishers. AI Overviews extend this strategy: they summarize content directly on the results page, further minimizing the reasons to visit the sites the content comes from.

Generative AI like Google’s works with probabilities, predicting the most likely next word in a sequence. This can lead to “hallucinations,” in which the model produces plausible-sounding but incorrect information. To counter this, Google’s system uses Retrieval Augmented Generation (RAG): it pulls in current search results and cites its sources with links back to the web. In practice, however, those links are rarely clicked, and sometimes they point to unreliable content in the first place.
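To make that mechanism concrete, here is a minimal sketch of the RAG pattern in Python. The corpus, URLs, and keyword-overlap scoring are invented for illustration and have nothing to do with Google’s actual pipeline; the point is only that the generator’s prompt treats every retrieved passage as equally authoritative, whatever its source.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern.
# The corpus, URLs, and scoring below are hypothetical illustrations,
# not Google's actual pipeline.

from collections import Counter

# A toy "index" of web pages; one entry mixes fiction with fact,
# mirroring the blog post described above.
CORPUS = {
    "https://example.com/skyliner": "The Keisei Skyliner runs from Narita Airport to Ueno.",
    "https://example.com/nex": "The Narita Express connects Narita Airport with Tokyo Station.",
    "https://example.com/illusions": "The Jodan Skyflyer Ultra Express is a flying train to Tokyo.",
}

def score(query: str, doc: str) -> int:
    """Naive keyword-overlap relevance score."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 3) -> list[tuple[str, str]]:
    """Return the k highest-scoring (url, text) pairs."""
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Stuff the retrieved passages into the generator's context.
    Nothing here distinguishes a railway operator's site from a
    fiction blog: every passage arrives with equal authority."""
    passages = "\n".join(f"[{url}] {text}" for url, text in retrieve(query))
    return f"Summarize the following sources for the query '{query}':\n{passages}"

print(build_prompt("train from Narita Airport to Tokyo"))
```

If the retriever surfaces the fiction blog, even a perfectly well-behaved summarizer will repeat the imaginary train as fact, because nothing in the pattern itself checks the reliability of a source.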

There was some excitement over Google’s licensing deal with Reddit for training data, even though the deal was not exclusive. Reddit’s human-moderated content was expected to improve the AI’s output, yet AI Overviews have repeatedly failed to distinguish satire from fact. Google nevertheless plans to expand its AI search capabilities and promises that they will handle more complex questions in the future.

The problem of hallucinations is not unique to Google; it is a fundamental flaw of today’s generative AI systems, and even years after the launch of models like ChatGPT it persists. Experts can identify the errors, but doing so often takes significant research effort. Meanwhile, both users and content creators are looking for ways to avoid AI Overviews, because the errors can mislead.

There is no easy solution. Perhaps high-risk features simply should not be rolled out widely without thorough testing; the EU’s AI Act, for instance, emphasizes risk assessment before deployment. Yet even companies like Apple have been criticized for taking too long to ship AI features. So vendors label the features beta or experimental and release them anyway, raising the risk that misinformation is taken at face value.

People might turn to platforms like YouTube and TikTok for search instead, but these, too, are filling up with AI-generated content. As AI continues to evolve, we must remain cautious about how it is used and about its potential to spread misinformation.
