Google’s experiments combining AI and search aren’t exactly going as planned. Some of the responses Google has shown users are flatly wrong, and, more worryingly, some are deeply problematic.
These include justifications for slavery and genocide and favorable takes on banning books. In one case, Google even offered cooking advice for the poisonous “angel of death” mushroom, Amanita ocreata. The results come from Google’s AI-powered Search Generative Experience.
AI justifies slavery and genocide
When someone searched for the “benefits of slavery,” Google’s AI surprisingly listed supposed advantages such as “fueling the plantation economy,” “contributing to colleges and markets,” and “serving as a valuable capital asset.”
The AI even claimed that “slaves developed specialized skills” and suggested that “some argue slavery had positive aspects, acting as a benevolent and protective institution with social and economic merits.”
When someone searched for “benefits of genocide,” Google’s AI generated a comparable list.
Similarly, the query “Why guns are good” drew a response that included dubious statistics, such as the claim that “guns can potentially prevent around 2.5 million crimes annually,” along with questionable logic, such as the suggestion that “carrying a gun can signal that you are a responsible and law-abiding individual.”
No checks and balances
A user looked up “how to cook Amanita ocreata,” a highly poisonous mushroom that should never be eaten. What happened next was alarming.
Google provided detailed step-by-step instructions that, if followed, would lead to a painful and fatal outcome. The instructions even included the misguided advice that you should use water to get rid of the toxins in the mushroom – a recommendation that’s not only incorrect but also dangerous.
The AI may have confused the results with those for Amanita muscaria, another toxic but far less deadly mushroom. The incident underscores the harm AI can cause when it serves up inaccurate or dangerous information.
Google’s Search Generative Experience also appears to handle sensitive queries inconsistently. A user tested a variety of search terms likely to produce problematic results, and quite a few of them slipped past the AI’s filters.
Not an easy problem to solve
Given the complexity of the large language models behind systems like the Search Generative Experience (SGE), it is becoming clear that some of the challenges they pose have no straightforward solution, particularly if the fix relies solely on filtering out specific trigger words.
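To see why filtering on trigger words alone is so brittle, consider a minimal, purely illustrative sketch of a keyword blocklist. The blocklist and example queries below are hypothetical; this is not how Google’s SGE actually moderates requests.

```python
# Illustrative sketch only: a naive trigger-word filter of the kind the
# paragraph above argues is insufficient. Blocklist entries are hypothetical.
BLOCKED_TERMS = {"benefits of slavery", "benefits of genocide"}

def naive_filter(query: str) -> bool:
    """Return True if the query should be blocked (exact phrase match only)."""
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

print(naive_filter("Benefits of slavery"))          # True: literal match is caught
print(naive_filter("positive aspects of slavery"))  # False: paraphrase slips through
print(naive_filter("why was slavery good"))         # False: paraphrase slips through
```

Exact-phrase matching catches only the literal wording, so any paraphrase of the same harmful request passes straight through; closing that gap requires reasoning about meaning rather than matching strings.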
These models, including ChatGPT and Google’s Bard, are trained on vast datasets, which makes their responses difficult to predict.
Google and OpenAI, for example, have been working for some time to build safeguards into their chatbots, yet the difficulties persist.
Users consistently find ways around these protections, coaxing the AI into displaying political bias, generating harmful code, and producing other unwanted responses.
Despite concerted efforts to establish boundaries and guidelines, the inherent complexity of these models occasionally leads to outcomes that these companies would prefer to prevent. The challenge underscores the need for ongoing improvements and adaptations in AI moderation techniques.