Meta isn’t the only company grappling with the rise of AI-generated content and how it affects its platform. YouTube also quietly implemented a policy change in June that allows users to request the removal of AI-generated or other synthetic content that simulates their face or voice. The change lets users request removal of this type of AI content through YouTube’s privacy request process, and it extends the company’s previously announced approach to responsible AI, first introduced in November.
Instead of requesting that content be removed for being misleading, like a deepfake, YouTube wants affected parties to request removal of the content directly as a privacy violation. According to YouTube’s recently updated privacy Help documentation on the subject, this requires first-party claims outside of a handful of exceptions, such as when the affected person is a minor, lacks access to a computer, is deceased, or in other similar circumstances.
Simply submitting a takedown request does not necessarily mean the content will be removed. YouTube cautions that it will evaluate the complaint itself based on a variety of factors.
For example, it may consider whether the content is disclosed as synthetic or created with AI, whether it uniquely identifies a person, and whether it could be considered parody, satire, or otherwise of value and in the public interest. The company further notes that it may consider whether the AI content features a public figure or other well-known person, and whether it shows them engaging in “sensitive behavior” like criminal activity, violence, or endorsing a product or political candidate. This last point is of particular concern in an election year, when AI-generated endorsements could potentially swing votes.
YouTube says it will also give the content uploader 48 hours to act on the complaint. If the content is taken down before that deadline, the complaint is closed. Otherwise, YouTube will initiate a review. The company also warns uploaders that removal means the video is taken down entirely from the site and, where applicable, that the individual’s name and personal information are removed from the video’s title, description, and tags. Uploaders can also blur people’s faces in their videos, but they cannot simply make the video private to comply with the removal request, as the video could revert to public status at any time.
The company did not widely announce the policy change, however. In March, it introduced a tool in Creator Studio that allows creators to disclose when realistic-looking content was made with altered or synthetic media, including generative AI. More recently, it also began testing a feature that would let users add collaborative notes providing additional context about videos, for example whether they are meant as parody or are misleading in some way.
YouTube is not against the use of AI, having already experimented with generative AI itself, including a comments summarizer and a conversational tool for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content as such will not necessarily protect it from removal, as it must still comply with YouTube’s Community Guidelines.
In the event of privacy complaints regarding AI material, YouTube will not rush to penalize the creator of the original content.
“For creators, if you receive a notification about a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines warnings, and receiving a privacy complaint will not automatically result in a warning,” the company explained last month in a post on the YouTube community site, where it directly informs creators of new policies and features.