YouTube is launching tools that it says will allow creators to harness AI’s creative potential while maintaining control of their likeness.
The two new tools will help creators find content that uses AI-generated or synthetic likenesses of them – whether that content mimics their faces or their voices.
The announcement follows the July reveal of guidelines explaining how creators can easily report content that “used AI to alter or create synthetic content that looks or sounds” like them.
Fighting the Rise of the Deepfake
Details of the new tools have been shared by Amjad Hanif, Vice President of Creator Products at YouTube, who emphasizes that there must be a balance between creativity and safeguarding creators’ IP, which includes their likenesses.
The two tools will “equip them with the tools they need to harness AI’s creative potential while maintaining control over how their likeness, including their face and voice, is represented,” he explained.
Helping Creators Protect Themselves
The first tool is an expansion of Content ID, accessible through YouTube’s Studio Content Manager. It will allow users to “automatically detect and manage AI-generated content on YouTube that simulates their singing voices.” Hanif says the technology is currently being refined, with a pilot program set to kick off early next year.
The second tool focuses on visuals, letting users detect and manage AI-generated content that shows their faces. Hanif adds that it will impact “people from a variety of industries—from creators and actors to musicians and athletes.”
Balancing Innovation With Responsibility
This isn’t a tirade against AI, though, as Hanif is quick to point out how users have benefited from AI features such as recommendations and creative applications like auto-dubbing. He states: “We remain committed to ensuring that YouTube content used across Google or YouTube for the development of our AI-powered tools is done responsibly.”
Hanif also emphasized that creators have a role to play, as any AI-generated content made by users must adhere to YouTube’s Community Guidelines. He writes that creators are responsible for their content and will face action if they fall foul of the rules.
“This means we may block prompts that violate our policies or touch on sensitive topics,” he writes, adding: “While our primary goal is to empower creativity, we encourage creators to review AI-generated content carefully before publishing it, just as they would do in any other situation.”
The news from YouTube comes just weeks after the US Senate unanimously passed a new bill to allow victims of non-consensual deepfake pornography to sue those responsible. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act is not yet law, but it shows legislators are finally taking action on AI.