YouTube is adding new rules surrounding content generated or manipulated by artificial intelligence, including labeling requirements.
The Google-owned video platform announced Tuesday in a blog post that it will roll out a series of updates over the next few months, including a requirement that creators disclose at upload whether their content is AI-generated; that disclosure will add a label to the video alerting viewers.
YouTube gave the example of AI-generated videos realistically depicting a purported event that never happened, or showing a person saying or doing something they didn't, adding, "This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials."
Creators who repeatedly fail to disclose AI-generated content under the new rules risk having their videos removed from YouTube, the company said.
The new labels will appear in the description of videos where AI was used, and content about sensitive topics will have a second, more prominent label added to the video player.
The video platform said it will also begin allowing people to request the removal of AI-generated content that "simulates an identifiable individual," including the use of their face or voice, through the company's privacy request process. However, not all requests will be honored.
"Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests," the blog post reads. "This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar."
YouTube also said that it would be "building responsibility" into its AI tools and features.
"We’re thinking carefully about how we can build upon years of investment into the teams and technology capable of moderating content at our scale," the announcement said. "This includes significant, ongoing work to develop guardrails that will prevent our AI tools from generating the type of content that doesn’t belong on YouTube."