To address the potentially misleading or harmful effects of generative AI, YouTube announced on Tuesday that it will soon require creators to disclose whether a video was produced using generative AI. The update is part of a series of measures to mitigate the impact of AI-generated content.
According to Jennifer Flannery O'Connor and Emily Moxley, vice presidents of product management at YouTube, creators will see new options when uploading content to disclose whether it contains realistically altered or synthetic material.
Creators who consistently fail to disclose this information may face penalties such as content removal or suspension from the YouTube Partner Program. The announcement also noted that artists and creators can request the removal of content, including music, that uses their likeness without consent.
Generative AI, which has become more widely accessible, poses an increased risk of deepfakes and misinformation, particularly in the context of upcoming elections. The public and private sectors have recognized the importance of detecting and preventing the malicious use of generative AI.
For example, President Biden’s AI executive order specifically addressed the need for labeling or watermarking AI-generated content. OpenAI is developing a “provenance classifier,” a tool that identifies whether an image was created using its DALL-E 3 AI generator. Meta recently announced a policy requiring political advertisers to disclose the use of generative AI in ads.
When creators upload videos to YouTube, they will be prompted to indicate whether the content contains realistically altered or synthetic material. This includes AI-generated videos that realistically depict events that never happened, or content showing individuals saying or doing things they did not actually say or do.
To inform viewers about AI-generated or altered content, labels will be added to the description panel, with a more prominent label for content involving sensitive topics. Even if the content is appropriately labeled, it will be removed if it violates YouTube’s community guidelines.
While generative AI can be used to create convincingly realistic fake content, YouTube also plans to turn the same technology toward content moderation, using it to identify and catch material that violates community guidelines. The company aims to deploy generative AI to help contextualize and understand threats at scale.