YouTube has announced the expansion of its AI-powered likeness detection technology to a pilot group of government officials, political candidates, and journalists as part of a broader effort to address the growing risks posed by synthetic media. The technology, first introduced last year to approximately four million creators in the YouTube Partner Program, identifies deepfakes by detecting simulated versions of a person's face produced with generative AI tools.
Participants in the pilot program will gain access to a system that detects unauthorized AI-generated videos and allows them to request removal if the content violates YouTube’s policies. Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy, indicated that the initiative is designed to protect the integrity of public discourse while balancing free expression.
The detection system operates similarly to YouTube’s Content ID technology, which identifies copyrighted material. Requests for removal will be evaluated under existing privacy guidelines to determine whether content qualifies as parody or political critique.
Eligible users will verify their identity through government identification before accessing the tool. Amjad Hanif, Vice President of Creator Products at YouTube, said the company will also label AI-generated videos and may eventually expand the technology to detect synthetic voices and other intellectual property.