Meta, the parent company of Facebook, has announced that it will bar political campaigns and advertisers in regulated industries from using its new generative AI advertising tools. The decision comes in response to concerns that such AI-powered tools could accelerate the spread of election-related misinformation.
The company revealed this policy update in its help center, stating that advertisers running campaigns related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services will not be permitted to use generative AI features. While Meta’s advertising standards already prohibit ads with content debunked by fact-checking partners, the company had not previously established specific rules regarding AI-generated content.
Meta explained that this approach is intended to help it better understand the potential risks of using generative AI in ads related to sensitive topics in regulated industries. The company has been testing generative AI ad-creation tools and plans to make them available to all advertisers worldwide next year.
This development follows the growing trend among tech companies, including Meta, to introduce generative AI ad products and virtual assistants in response to the popularity of AI models like OpenAI’s ChatGPT. However, there have been limited details regarding safety measures imposed on these systems.
Alphabet’s Google recently launched similar generative AI ad tools and plans to prevent political content by blocking specific “political keywords” from being used as prompts. Google also intends to require disclosures for election-related ads containing synthetic or inauthentic content.
Snapchat owner Snap and TikTok prohibit political ads altogether, while X, formerly Twitter, has yet to introduce generative AI advertising tools.
Meta’s decision to limit the use of generative AI for political advertising reflects concerns about the technology’s potential misuse for election interference, as highlighted by Nick Clegg, Meta’s top policy executive. Clegg has said that rules governing generative AI in political advertising need to be updated, and that governments and tech companies should prepare for potential misuse of the technology in the lead-up to the 2024 elections.
Meta has previously taken steps to address AI-generated content, such as blocking its user-facing Meta AI virtual assistant from creating lifelike images of public figures and committing to develop a system to “watermark” AI-generated content. The company also restricts misleading AI-generated videos, with exceptions for parody and satire.
Meta’s independent Oversight Board is currently examining the company’s approach to AI-generated content, including a case involving a doctored video of US President Joe Biden that Meta left online on the grounds that it was not AI-generated.