This matters, not because it is the end state, but because there are increasing concerns about how these LLMs scrape data and are trained. It started as a legal battle; my hope is that the AI future is a collaborative effort where creators want to opt in. “Yesterday, OpenAI announced in a blog post it is developing a ‘Media Manager’ that will allow artists, creators, and content owners to claim ownership of their works and specify whether or not they want them to be part of training OpenAI’s models. Creatives will ultimately be able to opt out of having their written or visual work included in future AI training datasets—when the tool is released by 2025.” And this from Fortune’s Diane Brady: “AI developer OpenAI is developing a new ‘Media Manager’ tool for content creators amid a furore around the ChatGPT creator’s respect for copyright. The tool, set to be released in 2025, will let content creators opt out of letting OpenAI train AI models on their work. Experts suggest the new tool has been created to comply with standards on data mining in Europe’s new AI Act.” #ai #genai Paul W. Kevin Rank, MBA https://lnkd.in/gs3zYNvx
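For anyone who doesn't want to wait for Media Manager, the opt-out signal OpenAI already documents today is a robots.txt rule for its GPTBot web crawler. A minimal sketch of that existing mechanism follows (standard robots.txt syntax; whether the future Media Manager opt-out will work the same way is an open question, not something OpenAI has confirmed):

# robots.txt at the site root: tells OpenAI's GPTBot crawler not to fetch any pages
User-agent: GPTBot
Disallow: /

Note that this only affects future crawling; it does not pull back anything already collected, which is part of why creators are pushing for a tool like Media Manager in the first place.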
Dwight Pond’s Post
More Relevant Posts
-
I love the not-so-humble brag at the start of OpenAI and Microsoft's defence submissions in the dispute with the New York Times, filed on Monday: "The artificial intelligence (AI) tool known as ChatGPT is many things: a revolutionary technology with the potential to augment human capabilities, fostering our own productivity and efficiency; an accelerator for scientific and medical breakthroughs; a mechanism for making existing technologies accessible to more people; an aid to help the visually impaired navigate the world; a creative tool that can write sonnets, limericks, and haikus; and a computational engine that reasonable estimates posit may add trillions of dollars of growth across the global economy. Contrary to the allegations in the Complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times." They didn't list that ChatGPT would bring world peace, but that might come later in the proceedings. Fundamentally, their argument is one of fair use. https://lnkd.in/en-SrtBZ #newyorktimes #ai #copyright #openai #artificialintelligence
OpenAI says New York Times 'hacked' ChatGPT to build copyright lawsuit
reuters.com
-
For creators, 'their work is being used to train AI systems that may ultimately put them out of business'. 'By destroying the economic incentives for content creation, AI companies risk poisoning the well from which they draw their training data.' 'Sustainable AI development requires a new social contract between technology companies and content creators, and it can only work with a system where content creators are fairly compensated for their contributions to AI development.' #content #AI #licensing #copyrightlaw
Ex OpenAI Researcher: How ChatGPT’s Training Violated Copyright Law
social-www.forbes.com
-
"Schiff’s bill would not ban AI from training on copyrighted material, but would put a sizable onus on companies to list the massive swath of works that they use to build tools like ChatGPT – data that is usually kept private." If you are using generative AI within your organization (and I'd be willing to bet that's most of us at this point), you should have eyes on this. "Is this legal?" is a massive question that will impact the whole industry, and the ripple effects will include all the companies and individuals using these tools. There are plenty of confident people on both sides of the argument but the truth is...we don't know how any of this will shake out. Understanding how models are trained, where the training data comes from, and the potential ethical and legal concerns of using these tools is the responsibility of every company using generative AI technologies. When in doubt, watch your six. https://lnkd.in/etREQxwi
New bill would force AI companies to reveal use of copyrighted art
theguardian.com
-
How can creators protect their work in an AI-driven world? The issues of AI and copyright are not just theoretical, as they are shaping the future of writing and content creation. #datascience #AI #artificialintelligence https://hubs.li/Q02NQ0HZ0
What’s the State of AI and Copyright in 2024?
https://opendatascience.com
-
A bill was introduced in Congress this week (April 9) that would require transparency from companies regarding their use of copyrighted work to train generative AI models such as ChatGPT. Read the article for more: https://ow.ly/P03C50Re65e #US #Copyright #Transparency #AI
US bill demands transparency over copyrighted works to train genAI
worldipreview.com
-
When an #AI is trained, is it just reading and remembering content the way humans do? Put another way, we aren't plagiarizing when we read an article and then synthesize its content in conversation with friends. Is that what AI is doing too? A federal district court judge found that OpenAI's use of publicly available data for training purposes was more akin to our reading and remembering than to mere copying. According to the court, the training did not cause the plaintiffs the kind of concrete harm needed to support the lawsuit. “Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote,” McMahon wrote. This is potentially a very big deal. The ruling doesn't claim to answer all of the legal questions around #AI and training data, but it is an interesting approach. What do you think, is this the right way to think about AI training? https://lnkd.in/ej8JP98z
OpenAI defeats news outlets' copyright lawsuit over AI training, for now
reuters.com
-
🚨 The AI Revolution Needs Transparency! 🚨 Did you know that generative AI systems can whip up text, images, videos, and even computer code based on just a simple prompt? 🤯 But there’s a catch… what's actually happening inside that 'black box' of AI training is a mystery. Imagine trusting an autopilot without knowing how it handles turbulence; that's what we're facing with AI. 🔍 Creators are now calling on Ottawa to demand disclosure of how these AI systems, like OpenAI's ChatGPT, are trained. Why? Because transparency is crucial for innovation we can trust. Think about it: - 📈 Enabling better, more ethical AI development. - 👥 Building trust with consumers who use these AI-powered tools. - 💡 Fostering a more informed and involved tech community. This could reshape our digital future! Check out the full article for more insights: https://lnkd.in/g_J-zPCe What's your take on AI transparency? Let's discuss! ⤵️ #AI #TechTransparency #Innovation #EthicalAI
Creators urge Ottawa to force disclosure of 'black box' AI system training
moosejawtoday.com