YouTube has opened its AI likeness detection tool to the entire entertainment industry, free of charge. Celebrities, talent agencies, and management firms can now submit a person's likeness data to the system, and YouTube will automatically scan the platform for AI-generated replicas and flag them for that person's team to review.
The rollout has been phased. YouTube piloted the technology with CAA in 2025, extended it to creators in the YouTube Partner Program in September, and then opened it to journalists, government officials, and political candidates in March 2026. This week's announcement removes the gating entirely for public figures: a subject can opt in even without maintaining a YouTube channel.
The detection loop is straightforward on the surface: the rights-holder uploads likeness data, YouTube's models flag suspected AI replicas across uploads, and the subject's team decides whether to leave each flagged video up or file a removal request. Removal is neither automatic nor guaranteed; parody, satire, and other fair-use cases remain protected under YouTube's community guidelines, and the company reviews each request individually.
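For readers who want a mental model of that loop, here is a minimal sketch in Python. It is purely illustrative: YouTube has not published an API for this workflow, so every class, field, and threshold below is a hypothetical stand-in for the opt-in, scan, and human-review steps described above.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative model only -- none of these names correspond to a real YouTube API.

class Decision(Enum):
    LEAVE_UP = "leave_up"                 # e.g. parody or satire the team is fine with
    REQUEST_REMOVAL = "request_removal"   # sent to YouTube, which reviews each request individually

@dataclass
class LikenessProfile:
    subject_name: str
    reference_media: list[str]            # verified reference images/video supplied at opt-in

@dataclass
class FlaggedVideo:
    video_id: str
    match_confidence: float               # model's confidence that the likeness is replicated

def review_flagged(videos: list[FlaggedVideo], is_fair_use) -> dict[str, Decision]:
    """Hypothetical triage step performed by the subject's team.

    The platform's models surface candidates; the team decides per video,
    and removal is still subject to YouTube's own review.
    """
    decisions = {}
    for video in videos:
        if is_fair_use(video):
            decisions[video.video_id] = Decision.LEAVE_UP
        else:
            decisions[video.video_id] = Decision.REQUEST_REMOVAL
    return decisions

if __name__ == "__main__":
    profile = LikenessProfile("Example Subject", ["ref_clip_01.mp4"])
    flagged = [FlaggedVideo("abc123", 0.94), FlaggedVideo("def456", 0.71)]
    # Placeholder policy: treat lower-confidence matches as likely parody or commentary.
    print(review_flagged(flagged, is_fair_use=lambda v: v.match_confidence < 0.8))
```

The key point the sketch captures is that detection and enforcement are separate stages: the model only nominates candidates, and two layers of human judgment, the subject's team and then YouTube's reviewers, sit between a flag and a takedown.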
The context matters. Deepfake-driven impersonation has moved from a niche problem to a mainstream one for public figures, with AI-generated political endorsements, fake interview clips, and synthetic product pitches all surfacing across platforms in 2025 and 2026. By opening the detection pipeline to anyone who can prove a claim on a likeness, not just creators in the Partner Program, YouTube is effectively treating likeness data as a first-class IP asset worth building enforcement infrastructure around.