In a recent paper, Google researchers raised alarms about the impact of generative AI on the internet, an ironic warning given that Google itself has been vigorously promoting the technology to its vast user base.
The study, which has yet to undergo peer review and was highlighted by 404 Media, finds that many people use generative AI to “blur the lines between authenticity and deception,” chiefly by posting fake or doctored AI-generated content, such as images and videos, on the internet.
The researchers surveyed prior research on generative AI and reviewed around 200 news articles documenting its misuse. Their findings indicate that manipulating human likeness and falsifying evidence are among the most common tactics in real-world abuse. These activities typically aim to sway public opinion, enable scams and fraud, or generate profit.
A key concern is that generative AI systems have become increasingly advanced and accessible, requiring minimal technical expertise. The researchers found that this is distorting people’s “collective understanding of socio-political reality or scientific consensus.”
One notable omission: the paper never mentions Google’s own missteps with generative AI, even though the company, one of the largest in the world, has occasionally made significant errors in deploying the technology.
The study suggests that the widespread misuse of generative AI indicates the technology is performing its intended function too well. People use generative AI to produce large amounts of fake content, effectively inundating the internet with AI-generated misinformation.
Google exacerbates the situation: it has not only permitted this fake content but has at times been its source, serving up false images and information of its own. The resulting flood makes it ever harder for people to distinguish real information from fake.
The researchers warn that the mass production of low-quality, spam-like, and malicious synthetic content increases public skepticism toward digital information. It also overloads users with the need to verify the authenticity of what they encounter online.
More disturbingly, the researchers point out instances where high-profile individuals have been able to dismiss unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways. This tactic undermines accountability and complicates the verification process.
As companies like Google continue to integrate AI into their products, the prevalence of these issues is expected to rise. The research underscores the need for vigilance and robust measures to address the challenges posed by generative AI in maintaining the integrity of online information.