As AI-generated content becomes increasingly sophisticated, YouTube has begun rolling out a detection tool aimed at identifying and flagging AI-generated likenesses of creators, artists, and celebrities, offering a measure of comfort to anyone worried about deepfakes of themselves.
The video streaming platform has been developing the tool in partnership with Creative Artists Agency (CAA), suggesting that one of its functions will eventually be to recognize celebrity deepfakes.
For now, the tool is limited to a select group of creators with YouTube channels, but it will be rolled out to all creators in the YouTube Partner Program in the coming months, PC Mag reported.
YouTube notified select creators of the rollout by email on Tuesday (October 21), according to The Verge. To use the tool, creators must provide a photo ID and a short video of themselves. Several days after signing up, they should start seeing flagged videos in the “Content Detection” tab, where they can respond by filing a removal request, filing a copyright infringement claim, or taking no action.
In a video announcing the tool, YouTube emphasized its purpose: to protect the reputation and commercial interests of creators on its platform.
“You will be able to send a removal request for review under YouTube’s privacy guidelines and protect your viewers by keeping your audience from being misled about what you endorse and what you don’t,” the company said.
YouTube added:
“There is a chance you may not see matches if altered or synthetic content with your face is barely or never uploaded to YouTube. This is completely normal and indicates that we haven’t detected unauthorized use of your visual likeness on the platform. And we hope that brings you peace of mind.”
The tool operates similarly to YouTube’s Content ID system, which scans the platform for copyrighted content to compensate rightsholders. This new deepfake detection system applies the same principle — but instead of music or video rights, it identifies recognizable faces used without permission.
YouTube has not disclosed if or when it plans to expand the tool beyond creators on the platform. However, that’s the expectation: when YouTube announced its CAA partnership last December, it gave certain high-profile people the ability to “provide critical feedback to help us build our detection systems,” the company said at the time.
The detection tool is one of several recent steps YouTube has taken to mitigate risks tied to generative AI. Last year, the platform updated its privacy policies to allow members of the public to file removal requests for videos that imitate their voice or likeness. It also introduced a system allowing rightsholders to request takedowns of videos they believe “mimic an artist’s unique singing or rapping voice.”
At the same time, YouTube and parent company Google have been among the most active players in developing AI technology. Earlier this year, YouTube rolled out an AI music tool capable of generating copyright-free soundtracks for creators and began testing an AI music host to rival Spotify’s AI DJ.
In 2023, YouTube signed a deal with Universal Music Group (UMG) to develop AI tools with built-in protections for rightsholders. According to a 2024 report in the Financial Times, YouTube was also in talks with major record companies to license their music for AI training, though progress was reportedly slowed by a lack of willing artists.