
It's understandable that many find generative AI unsettling. Not long ago, OpenAI introduced an image generation tool capable of creating artwork in the Studio Ghibli style and rendering text with impressive accuracy.
More troubling is how scammers have used this technology to forge convincing fake documents, including fraudulent job offers and misleading social media ads for cryptocurrency schemes. Generative AI also opens the door to identity theft. In one case, Anne, a 53-year-old woman from France, lost €830,000 (around $850,000) after being deceived by fraudsters pretending to be actor Brad Pitt.
These scammers relied on AI-generated images and messages to build a fake romantic relationship with Anne. Eventually, they convinced her to send large sums of money for supposed medical expenses.
To address growing concerns around deepfakes, a bipartisan bill titled Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) was introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), and others, with support from YouTube. The bill aims to:
- Hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance;
- Hold platforms liable for hosting an unauthorized digital replica if the platform has actual knowledge of the fact that the replica was not authorized by the individual depicted;
- Exclude certain digital replicas from coverage based on recognized First Amendment protections; and
- Largely preempt State laws addressing digital replicas to create a workable national standard.
In a recent blog post, YouTube said:
We're proud to support this important legislation, which tackles the growing problem of harm associated with unauthorized digital replicas. AI-generated content simulating a person’s image or voice can be used to mislead or misrepresent. The NO FAKES Act, alongside other legislative efforts we support like the TAKE IT DOWN Act, provides clear legal frameworks to address these issues and protect individuals' rights.
To help reduce deepfakes, YouTube has updated its privacy process. Anyone uncomfortable with altered or synthetic content that mimics them can now request a takedown.
YouTube also highlighted the likeness management system it introduced last year. This system helps creators monitor how their likeness, including their face, is being used in AI-generated content across the platform.
In the past year, YouTube has rolled out several generative AI features. For instance, in November, it tested a tool that allows creators to remix licensed songs and generate unique 30-second tracks for Shorts.
Earlier, it also introduced Dream Track, an experimental feature that lets users generate soundtracks using AI voices of artists like John Legend, Sia, Charli XCX, and Troye Sivan.