On Monday, YouTube unveiled an update that gives its content creators more control over third-party artificial intelligence (AI) training. The move follows the video-streaming giant's earlier measures to protect creators from fake content that duplicates their likenesses, such as their appearance and voice. The new option lets creators choose whether third-party AI firms can use their videos to train large language models (LLMs), and allows them to authorize some AI companies while blocking others. As of Monday, creators and rights holders can notify YouTube if they allow specific third-party AI companies to train models on their material.
In short, the update lets YouTube creators decide which AI firms can train models on their videos. Companies are racing to collect ever more data to train AI models and build LLMs. Having largely exhausted publicly available data, these AI companies are now looking for new ways to gather large volumes of high-quality data to train and improve their models.
While some AI businesses have pursued content-licensing partnerships, such data is generally expensive to obtain. Another option is synthetic data generated by other generative AI models, but that data risks being low quality, which can limit the capabilities of newer models trained on it.
Today we’re publishing a statement on AI training, signed by 10,000+ creators already:
“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
Signatories include… pic.twitter.com/AqVaEThMs4
— Ed Newton-Rex (@ednewtonrex) October 22, 2024
Creators can opt in to the new functionality through a setting in YouTube Studio, the creator dashboard, where they will see a list of 18 companies they can authorize to train on their videos.
The initial list includes AI21 Labs, Adobe, Amazon, Anthropic, Apple, ByteDance, Cohere, IBM, Meta, Microsoft, Nvidia, OpenAI, Perplexity, Pika Labs, Runway, Stability AI, and xAI. YouTube says these companies were chosen because they are building generative AI models and are likely candidates for collaboration with creators. Creators can also choose a setting labeled “All third-party companies,” which lets any third party train on their data, even those not on the list.
Businesses are therefore trying to work with content creators and platforms to obtain fresh, high-quality data to train AI models. For example, Grok is trained on public posts on X (formerly Twitter), while Meta AI is trained on public posts on Facebook and Instagram.
Given the vast volume of human-created video on YouTube, the platform has become an attractive target for AI startups, and its data grows more valuable as video-generation techniques improve. To safeguard creators, the video-streaming giant already prohibits firms from crawling and scraping videos without permission.
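For context, well-behaved crawlers are expected to honor a site's robots.txt rules before fetching pages. The sketch below, using Python's standard-library `urllib.robotparser`, shows how such a check works; the rules and URLs are a hypothetical sample for illustration, not YouTube's actual robots.txt.

```python
# Illustrative only: how a compliant crawler consults robots.txt rules
# before fetching a page. The rules below are a hypothetical sample.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /watch
Allow: /about
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A compliant crawler refuses to fetch disallowed paths.
print(parser.can_fetch("ExampleBot", "https://example.com/watch?v=abc"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/about"))        # True
```

Of course, robots.txt is advisory; YouTube's terms of service, not the file itself, are what legally prohibit unauthorized scraping.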
In a support post, the company described the new option, which lets content producers on the platform decide whether to grant AI firms access to their videos for LLM training. YouTube plans to roll out the update in the coming days, adding a “Third-party training” section under Studio Settings.
Users with administrator access to the YouTube Studio Content Manager are also eligible, according to the company, and creators will be able to view and change their third-party training settings from within their YouTube channel settings at any time.
With the rise of AI technology, particularly video-generation models such as OpenAI’s Sora, YouTube creators have complained that companies including Apple, Nvidia, Anthropic, OpenAI, and even Google itself had trained AI models on their content without their knowledge or compensation. This fall, YouTube said it would address the issue in the near future.
YouTube emphasizes that only videos approved by creators and the relevant rights holders will be eligible for AI training. The company’s terms of service still apply, meaning AI businesses cannot unlawfully scrape videos from the platform.
The new option makes no mention of payments from AI firms to creators for using their videos, though YouTube says it will continue to support new forms of collaboration between creators and third-party companies.
While the new setting blocks third-party access, Google told TechCrunch that it will continue to train its own AI models on some YouTube material, in accordance with its existing agreement with creators. The setting also does not change YouTube’s Terms of Service, which prohibit third parties from accessing creator content unlawfully, for example by scraping.
Instead, YouTube frames the feature as a first step toward making it easier for creators to allow companies to train AI on their videos, possibly in exchange for compensation. A likely next step would be letting companies that creators have authorized obtain direct downloads of their videos.
At launch, the default setting for all creators will be to not allow third parties to train on their videos, making it clear that any company that previously did so acted against creators’ wishes.
YouTube would not say whether the new setting has any retroactive effect on third-party AI model training that has already occurred. The company notes, however, that its Terms of Service prohibit third parties from accessing creator content without authorization.
The company first announced plans for creator controls over AI training in September, alongside new AI detection tools aimed at preventing the likenesses of creators, artists, musicians, actors, and athletes, including their faces and voices, from being copied and used in other videos. According to the company, the detection technology builds on YouTube’s existing Content ID system, which was previously limited to copyright-protected content.
Creators throughout the world will be notified about the new functionality via banner alerts in YouTube Studio on desktop and mobile in the coming days.
Separately, Google’s AI research centre DeepMind introduced a new video-generating AI model, Veo 2, on Monday, aiming to compete with OpenAI’s Sora.