YouTube has begun rolling out a new AI-based tool designed to detect videos that feature a creator's face without their approval. The tool, called AI Likeness Detection, lets creators know if someone has used their likeness or altered their face with deepfake technology, which makes it possible to create videos that look entirely real even though they were never actually filmed.

According to YouTube, the tool is intended to protect the identity of content creators and prevent viewers from being misled by fake videos. The new feature is available in YouTube Studio, under a tab called Content Detection. To use it, creators must first complete a verification process that includes uploading a photo of an ID card and a short video recording of themselves, so the system can recognize them accurately. If the system then detects a video that uses their likeness, meaning their face or image, they receive a notification with the details: the video's title, the channel that uploaded it, the view count, and transcripts of the relevant parts of the dialogue. Next to each flagged video there is also an option to submit a removal request.
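YouTube has not published a public API or data format for these notifications, so the following is purely an illustrative sketch of how the details the article describes (title, uploading channel, view count, transcript excerpts) might be modeled. All names and types here are assumptions, not YouTube's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LikenessMatch:
    """Hypothetical record for one flagged video, mirroring the
    details a creator reportedly receives. Field names are
    assumptions, not YouTube's real notification format."""
    video_title: str
    channel_name: str
    view_count: int
    transcript_excerpts: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # Compact one-line summary a review queue might display.
        return f"{self.video_title!r} on {self.channel_name} ({self.view_count:,} views)"

# Example: one notification of the kind described above.
match = LikenessMatch(
    video_title="Endorsement the creator never recorded",
    channel_name="SomeChannel",
    view_count=48_210,
    transcript_excerpts=["...and I fully endorse this product..."],
)
print(match.summary())
```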

The system supports two main actions: submitting a request to remove AI-generated videos created without permission, or requesting the removal of videos that infringe copyright if protected content was used. Initially, the tool is being made available to members of the YouTube Partner Program, with access expanding gradually over the coming months. According to YouTube, the first creators to gain access will be those at higher risk of identity misuse, and the company plans to extend full access to all monetized creators by January 2026.
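Again as a purely illustrative sketch (YouTube exposes this flow only through the Studio interface, and no submission format has been published), the two removal grounds described above could be modeled as distinct request types; everything here is hypothetical.

```python
from enum import Enum

class RemovalGround(Enum):
    """The two grounds the article describes; names are hypothetical."""
    UNAUTHORIZED_LIKENESS = "ai_generated_without_permission"
    COPYRIGHT = "protected_content_used"

def removal_request(video_id: str, ground: RemovalGround) -> dict:
    # Assemble a minimal request payload; this shape is an
    # assumption, not YouTube's actual submission format.
    return {"video_id": video_id, "ground": ground.value}

print(removal_request("abc123", RemovalGround.UNAUTHORIZED_LIKENESS))
```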

The move is part of a broader wave of YouTube initiatives aimed at identifying AI-generated content and verifying its authenticity. In recent years, deepfake videos have become easier to produce and more widespread than ever, and they are sometimes used not only for entertainment but also to create false impressions or spread misinformation.