Google’s recent introduction of Bard Extensions, powered by its large language model, has raised concerns: in early testing, the tool fabricated emails and made other factual errors.
While Google’s integration of its generative AI into its established product lineup is a logical step, the company may have rushed the process.
According to New York Times columnist Kevin Roose, Bard, in its current state, isn’t quite the helpful inbox assistant that Google envisions. During his testing, Roose found that the AI created email conversations that never occurred.
The problematic behavior started when Roose asked Bard to analyze his Gmail and identify his major psychological concerns. Unusual as the request was, it was straightforward. Bard quickly responded, asserting that Roose tends to “worry about the future,” and cited an email, supposedly from Roose, expressing stress about work and fear of failure. However, Roose never sent that email.
Bard had misinterpreted a quote from a newsletter that Roose had received and used it to craft an utterly fictitious email, claiming that Roose had sent it.
This wasn’t an isolated incident. Bard continued fabricating emails, including one in which Roose allegedly complained about not being “cut out to be a successful investor.” The AI also got airline details wrong and even invented a train that doesn’t exist.
In response to these concerns, Jack Krawczyk, the director of Bard at Google, acknowledged that Bard Extensions is still experimental and in its initial stage.
Despite this disclaimer, the extension appears to have significant shortcomings, raising questions about Google’s decision to release it in this state. There are also concerns about the privacy implications of an AI analyzing personal emails.
Overall, Google’s eagerness to maintain its lead in the AI industry appears to have led to hasty decisions that risk significant problems.