OpenAI recently deactivated several ChatGPT accounts tied to an Iranian state-backed disinformation campaign that sought to derail the upcoming US elections.
The accounts used ChatGPT to create fake news articles and social media comments for distribution. This was the first influence operation OpenAI identified with a primary focus on the US elections, raising concerns about the potential misuse of AI to disrupt the 2024 election process.
The urgency lies in the growing interest of nation-state adversaries in meddling with the upcoming US elections. Experts worry that tools like ChatGPT could significantly enhance the speed and efficiency of crafting disinformation, making it easier to spread false narratives.
OpenAI’s investigation revealed that the disinformation activities were linked to a group known as Storm-2035, which has a history of creating fake news websites and disseminating them on social media to influence public opinion. The accounts involved were not only generating content related to the US presidential elections but also covering other sensitive topics, such as the Israel-Hamas conflict and Israel’s participation in the Olympic Games.
The broader context of this disinformation campaign ties back to recent findings by Microsoft, which had previously identified the same Iranian group in connection with spear-phishing attacks targeting US presidential campaigns. OpenAI discovered that the group had been operating a new set of social media accounts specifically designed to spread this misleading content.
As part of its investigation, OpenAI identified and shut down a dozen accounts on X (formerly known as Twitter) and one Instagram account. These accounts were part of a broader effort to circulate fake news and influence public discourse. In response, Meta, the parent company of Instagram, also deactivated the account in question, noting its connection to a previous Iranian campaign that targeted users in Scotland. X has not yet commented on the situation, but OpenAI has confirmed that the social media accounts are no longer active.
In addition to social media, the disinformation actors created five websites that posed as legitimate news outlets, representing both progressive and conservative viewpoints. These websites were used to publish AI-generated articles, one of which speculated on a potential running-mate choice for Vice President Kamala Harris, falsely framing the pick as a calculated move for unity.
Despite the sophistication of these campaigns, OpenAI found that most of the social media accounts sharing the AI-generated content failed to gain significant engagement. This underscores the difference between merely posting online and actually reaching and influencing a large audience.
The discovery of these accounts was made possible by new tools developed by OpenAI, which have been enhanced since its last threat report in May. These tools were crucial in detecting the accounts following Microsoft’s earlier revelations.
The broader implications of this discovery highlight the ongoing threat posed by foreign influence operations, particularly in the lead-up to the November election. While the full impact of these operations remains uncertain, the continued vigilance and development of detection tools will be critical in countering such threats.
In a related development, Google also warned about Iranian threat actors targeting the US presidential elections. This follows Microsoft’s earlier findings and adds further evidence of these actors’ persistent efforts to interfere with the election process. Google’s report identified a threat group known as APT42, which has targeted various organizations connected to the US elections through phishing attacks and social engineering tactics. These attacks have included attempts to compromise the Gmail accounts of high-profile individuals associated with the Trump and Biden campaigns.
APT42’s activities are believed to be connected to Iran’s Islamic Revolutionary Guard Corps (IRGC), and its campaigns have extended beyond the US to targets in Israel and to sectors such as military, defense, and academia. While some of these attacks have succeeded, efforts to protect high-risk individuals and prevent further breaches are ongoing. The continued threat underscores the need for vigilance as the election approaches, with the possibility of increased activity from foreign influence operations remaining a significant concern.