A consortium of current and former employees from leading AI companies, including OpenAI and Google DeepMind, has issued a warning about the risks posed by #AI technology. In an open letter, they expressed concern that the financial motives of AI companies hinder effective oversight, highlighting dangers of unregulated AI such as the spread of misinformation and the deepening of existing inequalities. The letter also pointed to instances of image generators producing disinformation despite company policies against such content. The group urged AI firms to establish channels for employees to raise risk-related concerns and to refrain from enforcing confidentiality agreements that stifle criticism. For further insights, check this excellent article from the New York Times: https://lnkd.in/dnhDyjvW #artificialintelligence #ainews #regulation
Artificial Intelligence News - The Intelligent Times’ Post
More Relevant Posts
-
“When I [joined] OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward.’”

I guess this is what #TheOtherShoe dropping sounds like during OpenAI's #NoGoodVeryBad month (the #ScarJo scandal, the dissolution of the "#superalignment" safety team, senior exec exits, new info about Sam Altman's firing, info about his personal AI investments, etc.).

The bigger takeaway, though, is that #AI #ethics, #safety, and #governance are real issues for ...
- Those racing to become "the next Google" and establish public #LLM supremacy;
- Every company that's going to use AI to enhance its business capabilities;
- And really, for society as a whole.

This will require a multi-pronged approach that touches every aspect of a company's business operating model, and a more active approach to legislation and regulation.

For those who are curious, IBM has been working on addressing these challenges for years. We have a comprehensive approach to AI Governance, the leading AI Governance platform (watsonx.gov), and have been active participants in multiple legislative and regulatory efforts. See the link to our main AI Ethics page in the comments. https://lnkd.in/eX6CVBxK
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
We definitely need a multipronged approach. Governance, best-practice guidelines and tools, and legislation are not enough on their own; we also need greater awareness from everyone - including the media and the teams embedding AI in our products and processes - so these issues are examined critically before they become a threat.

It's been frustrating to watch the discussion over the ethics of AI. In discussions I've had, I'm often seen as a Luddite for raising concerns, and I find the idea of synthetic users, for example, a poor choice in a climate that too often doesn't seem to value proper (i.e., not rushed) research. My frustration is that whistleblowers and media analysts are taken seriously when they raise concerns, but individual contributors and practitioners are not given the benefit of the doubt when they do the same.

I never doubted my undergraduate degree in applied ethics would be put to use - I spent time diving into the ethics of business and went on to a graduate program that stresses thinking through the impact of our work, something we often aren't doing. I worry we're going to need continued examples of whistleblowers stepping forward, because I see a lot of people who aren't thinking critically and who seem personally offended by any discussion of very real concerns.

AI isn't just a cute little chatbot - there is a lot more happening at an accelerated rate, and that rate in a time of permanent VUCA (volatility, uncertainty, complexity, and ambiguity) should be a concern for all.
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
🚨 Something’s Rotten in the State of AI: Is OpenAI the New Theranos? 🚨

☁️ An atmosphere of fear, secrecy, and intimidation is sweeping the AI industry. Employees at top AI firms know secrets about their systems' capabilities and risks that no one else does. But they're gagged by NDAs and face crushing penalties for speaking out.

💡 Yesterday, current and former OpenAI staff dropped a bombshell letter, backed by leading AI figures like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell. It underscores concerns from my recent piece, "What Did Sutskever and Leike See that Made Them Leave?"

✨ Interestingly, this comes almost three weeks after OpenAI's internal memo ditching NDAs. Still, new revelations on CEO Sam Altman’s cult-like grip add fuel to the fire 🔥:

1. Fear and secrecy are still alive at OpenAI; they’re just using subtler tactics now.
2. Like Elizabeth Holmes at Theranos, Sam Altman is:
   - Creating a culture of extreme secrecy and intimidation.
   - Crushing any dissent or criticism.
   - Demanding absolute loyalty while portraying himself as a visionary genius.
3. The letter isn’t just signed by OpenAI folks; it includes voices from DeepMind and Anthropic too. This suggests fear’s got its claws deep into the whole AI industry.

📰 Check out the full scoop here: NY Times Link (https://lnkd.in/dWhgBu-R)

#OpenAIGate #TechDrama #Whistleblowers #AICrisis
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
A fascinating article from nytimes.com on OpenAI, with "insiders" warning of reckless behavior. In the race to win AI, it would be fair to say that much of what is being done has a whiff of recklessness. We have already railroaded artists, writers, and original thinkers (unpaid, mind you) into creating the building blocks of some of the most powerful AI brands in the world. What are your thoughts? #ai https://lnkd.in/ggrU5eFb
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
AI is everywhere. This post was written using LinkedIn’s built-in AI assistance. But when do we consider it to be dangerous? How do you see AI changing your business? Read more about the warnings from OpenAI insiders: https://lnkd.in/gZ7Yhiwy
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://dnyuz.com
-
HOW DO WE MAKE SURE THAT AI COMPANIES ARE TRANSPARENT AND ACCOUNTABLE? The list of whistleblowers from AI companies just keeps getting longer. But former OpenAI employee Daniel Kokotajlo has specific recommendations (in the open letter) that make sense for governing AI safely.
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
A group of OpenAI insiders has recently blown the whistle, raising serious concerns about a "culture of recklessness" at the AI powerhouse. Their allegations highlight a fundamental tension in the AI industry: the race to develop increasingly powerful AI systems versus the responsibility to ensure these systems are safe and ethically deployed. This tension is not new, but it is becoming increasingly urgent as AI capabilities advance at an unprecedented pace. The whistleblowers' concerns are a wake-up call, underscoring the need for greater transparency, stronger safety measures, and more robust ethical guidelines in AI development.

The implications are far-reaching. OpenAI is a leading player in the AI field, and its actions ripple across the industry. If OpenAI prioritizes speed over safety, it could set a dangerous precedent for other companies, potentially leading to a race to the bottom on AI safety standards. On the other hand, if OpenAI takes these concerns seriously and doubles down on safety and ethical considerations, it could set a new standard for responsible AI development, demonstrating that innovation and responsibility can go hand in hand. The path OpenAI chooses will have a profound impact on the future of AI, and it is a decision in which we all have a stake.

#AI #OpenAI #AIethics #ResponsibleAI
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
A group of current and former employees is calling for sweeping changes to the artificial intelligence industry, including greater transparency and protections for whistle-blowers. #aithreats #openai #google #ai #artificialintelligence #aistartups #aibusiness #artificialintelligencetechnology #openletter #ai4good #airace #aiethics #humantouch #superintelligence #aievolution #aidriven #futureofai #aidevelopment #aiadoption #aiapplications #technologyinnovation #airisks #airesponsibility
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
Suppression of dissent within organizations working on breakthrough AI technologies must be addressed with this “right to warn.” We need more whistleblowers working at the cutting edge of AGI; we should not rely solely on government controls and corporate self-regulation. https://righttowarn.ai/
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com