AI text generators have become so proficient at writing like humans that even OpenAI, the maker of ChatGPT, arguably the most advanced text generator, can no longer reliably tell a piece generated by AI apart from one written by a human.
As meme-worthy as this development is, it does not bode well for AI companies and their plans to distinguish AI-generated content from human writing.
OpenAI shuts down its AI text detector.
OpenAI, the company behind ChatGPT, has quietly discontinued a tool designed to detect AI-generated text. The decision was made because the tool’s accuracy did not meet their standards, as mentioned in a blog update on July 20, 2023. OpenAI acknowledged the need for better techniques to trace text’s origin and is now improving the tool.
Initially, the AI detection tool was launched to assist educators in identifying academic cheating, as detecting AI-written text had become a significant concern. However, OpenAI was aware of the limitations of such tools and cautioned against relying solely on them for decision-making.
Students might turn to AI tools for reasons such as stress or the desire for shortcuts, complex issues that require more than just technological solutions.
AI detectors can’t be trusted either.
Even Turnitin, a popular plagiarism checker used in schools, ran into accuracy problems with its new AI-writing detection tool. After the software misidentified more than half of the text samples in a test conducted by the Washington Post, concerns grew that students could be falsely accused of using AI to cheat.
Experts had previously raised concerns about ChatGPT and similar generative AI technologies in January, fearing that they could exacerbate existing problems in education, such as an overemphasis on tests and formulaic essays.
The current state of AI technology allows humans to imitate AI models’ writing styles; conversely, AI models can replicate human-like writing when given appropriate prompts. Exploiting this, individuals can easily evade AI detectors by instructing ChatGPT to write in the style of a known author. Despite this challenge, the number of commercial AI detectors has grown over the last six months.
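To make the evasion trick concrete, here is a minimal sketch of what such a prompt might look like, assuming the openai Python package (v1+) and an API key in the environment; the model name, system instruction, and sample sentence are all illustrative and not taken from the article or any study.

```python
# A minimal sketch (not from the article): prompting a chat model to imitate
# a known author's style. Assumes the `openai` Python package (v1+) and an
# OPENAI_API_KEY environment variable; the model name, system prompt, and
# sample sentence are all illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice of model
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's text in the plain, declarative style of Ernest Hemingway.",
        },
        {
            "role": "user",
            "content": "The committee convened to deliberate upon the allocation of budgetary resources.",
        },
    ],
)

print(response.choices[0].message.content)
```

Output produced this way carries the statistical fingerprints of the imitated style rather than the model’s default register, which is exactly why detectors trained on “typical” AI prose struggle with it.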
Prominent AI writer and futurist Daniel Jeffries expressed his skepticism about the effectiveness of AI detection tools, pointing in particular to OpenAI’s struggles with its own tool. “If OpenAI can’t get its AI detection tool to work, nobody else can either,” he tweeted. “I’ve said before that AI detection tools are snake oil sold to people, and this is just further proof that they are. Don’t trust them. They’re nonsense.”
What happens to the plan of watermarking AI-generated text?
Researchers are exploring the possibility of watermarking AI-generated text by deliberately skewing the frequency of words in the output. However, studies indicate that such text watermarks can be easily bypassed by AI models capable of paraphrasing the content.
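The article does not spell out a specific scheme, but as a rough illustration of the “bias word frequencies, then count them” idea, the toy Python sketch below partitions a tiny, made-up vocabulary into a “green” subset seeded by the previous word, prefers green words during generation, and detects the watermark by counting how often green words appear. The vocabulary, hashing scheme, and threshold are all hypothetical and stand in for what a real language model would do at the logit level.

```python
# Toy sketch of frequency-based watermarking (hypothetical; not OpenAI's method).
# A tiny, made-up synonym vocabulary stands in for a real language model's choices.
import hashlib
import random

VOCAB = ["quick", "fast", "rapid", "swift", "speedy", "brisk"]


def green_list(prev_word: str, fraction: float = 0.5) -> set:
    """Deterministically pick a 'green' subset of the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = max(1, int(len(VOCAB) * fraction))
    return set(rng.sample(VOCAB, k))


def watermarked_choice(prev_word: str) -> str:
    """Generation side: prefer a green word (a real system would bias model logits instead)."""
    return random.choice(list(green_list(prev_word)))


def green_fraction(words: list) -> float:
    """Detection side: the share of words that fall in the green list seeded by their predecessor."""
    hits = sum(1 for prev, word in zip(words, words[1:]) if word in green_list(prev))
    return hits / max(1, len(words) - 1)


if __name__ == "__main__":
    words = ["quick"]
    for _ in range(20):
        words.append(watermarked_choice(words[-1]))
    # Close to 1.0 for watermarked output; unwatermarked or paraphrased text hovers near 0.5.
    print(f"green fraction: {green_fraction(words):.2f}")
```

Because detection only counts how often the biased word choices survive, a paraphrasing model that swaps in synonyms dilutes the green fraction back toward chance, which is precisely the bypass the research points to.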
Given the current state of AI writing technology, it appears that AI-generated text is here to stay. As we move forward, AI-augmented text has the potential to seamlessly blend with the great works of human history, making it difficult to detect if used skillfully.
In light of this development, it may be time to shift our focus from how text is composed to whether it accurately reflects the intended message of the human behind it. Effective communication should be the primary goal, whether the content is generated by AI or written by a person. As AI takes on a larger role in content creation, meaningful expression and faithful representation of ideas matter more than ever.