"The probability that advanced A.I. will destroy or catastrophically harm humanity is 70 percent... The world isn’t ready, and we aren’t ready, and I’m concerned we are rushing forward regardless and rationalizing our actions" A bit frightened by these statements made by Daniel Kokotajlo, a former researcher at OpenAI's governance department. Pursuing profits over safety is an age-old story in the corporate world; and the fallout of a lack of ethical behaviour is more dangerous than ever here. None of us know what's going on inside these companies, and that's why employees being able to freely express concerns and criticism is so important. I hope to see new operational processes enabling open dialogue to be put in place because these reports are... troubling, to say the least. Great read for anyone interested in AI and corporate governance: https://lnkd.in/gs9hQerg
Jaden Gregory’s Post
More Relevant Posts
-
A group of current and former employees from leading AI companies, including OpenAI and Google DeepMind, has issued a warning about the potential risks of #AI technology. In an open letter, they expressed concern that AI companies' financial motives hinder effective oversight, and highlighted risks from unregulated AI such as the spread of misinformation and the deepening of existing inequalities. The letter also cited instances of image generators producing disinformation despite company policies against such content. The group urged AI firms to establish channels for employees to raise risk-related concerns and to refrain from enforcing confidentiality agreements that stifle criticism. For further insights, check out this excellent article from the New York Times: https://lnkd.in/dnhDyjvW #artificialintelligence #ainews #regulation
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
Former employees of OpenAI, including Ilya Sutskever and Daniel Kokotajlo, are concerned about existential threats from AI; several have ties to the effective altruism movement. Kokotajlo believes there is a 50% chance that advanced AI (AGI) will arrive by 2027 and a 70% chance that it will catastrophically harm humanity.

OpenAI's safety protocols, including the "deployment safety board," have been criticized for not effectively slowing the release of potentially risky AI models. Kokotajlo grew concerned that OpenAI was focused on improving its models rather than prioritizing safety, and he eventually left the company. OpenAI has since started training a new flagship AI model and is forming a safety and security committee to address risks associated with that model and future technologies.

Kokotajlo and other former OpenAI employees are calling for an end to nondisparagement and nondisclosure agreements, as well as the establishment of a reporting process for safety-related concerns. They have retained Lawrence Lessig, a legal scholar and activist, to support their cause and are advocating for regulation of the AI industry. OpenAI says it offers avenues for employees to express concerns, including an anonymous integrity hotline, but the former employees are skeptical of self-regulation and believe AI needs a transparent, democratically accountable governance structure.

#openai #advancedai #agi #risk #safetyprotocols #governancestructure #samaltman
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
HOW DO WE MAKE SURE THAT AI COMPANIES ARE TRANSPARENT AND ACCOUNTABLE? The list of whistleblowers from AI companies keeps getting longer. But former OpenAI employee Daniel Kokotajlo has specific recommendations, laid out in an open letter, that make sense for the safe governance of AI.
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
The Reckless Race for AI Dominance 🚨

Today's New York Times article delves into the high-stakes environment at OpenAI, highlighting the company's relentless pursuit of AI dominance. At first it seems reminiscent of the breakneck pace typical of Silicon Valley, where speed is everything (as in my previous posts). But the narrative evolves into a profound dilemma: the delicate balance between advancing technology for humanity's benefit and the risks that come with it.

Key figures have departed, including researchers Ilya Sutskever and Jan Leike; though not part of the whistleblowing group, both have voiced their concerns. Enter Daniel Kokotajlo, a 31-year-old governance researcher who left OpenAI a month ago, citing a loss of confidence in the company's responsible behavior. His words resonate: "The world isn't ready, and we aren't ready."

In Mexico, we say, "Cuando el río suena, agua lleva": when the river sounds, it's carrying water. These recurring concerns amplify the urgency of the conversation around AGI (Artificial General Intelligence). AGI seems to be approaching faster than anticipated, and we must strive to achieve it responsibly, safely, and for the betterment of all humanity. Let's hope we tread this path with the utmost care and respect.
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
A group of current and former employees is calling for sweeping changes to the artificial intelligence industry, including greater transparency and protections for whistle-blowers. #aithreats #openai #google #ai #artificialintelligence #aistartups #aibusiness #artificialintelligencetechnology #openletter #ai4good #airace #aiethics #humantouch #superintelligence #aievolution #aidriven #futureofai #aidevelopment #aiadoption #aiapplications #technologyinnovation #airisks #airesponsibility
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
It's often the case that the pace of human innovation far outstrips the pace of responsible public governance, especially in our deeply fractured political environment. Slow policymaking can be beneficial when it comes to thoroughness, but plenty of nefarious cats get out of the bag because of this pace mismatch. What do you think can be done about this as it relates to AI? How do we pick up the pace of bipartisan policy development before it's too late? https://lnkd.in/e3b-cTGu
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
Suppression of dissent within organizations working on breakthrough AI technologies must be addressed, and this "right to warn" does exactly that. We need more whistleblowers working at the cutting edge of AGI; we should not rely solely on government controls and corporate self-regulation. https://righttowarn.ai/
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
A fascinating article from nytimes.com on OpenAI, with "insiders" warning of reckless behavior. In the race to win AI, it would be fair to say that much of what is being done has a whiff of recklessness. We have already railroaded artists, writers, and original thinkers (unpaid, mind you) into creating the building blocks of some of the most powerful AI brands in the world. What are your thoughts? #ai https://lnkd.in/ggrU5eFb
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
Imagine working at OpenAI or Google DeepMind, believing in the potential of #AI while also being one of the few who can hold these companies accountable—yet fearing retaliation if you speak up.

An open letter from employees at these companies highlights a troubling trend: silencing crucial voices in an unregulated tech landscape. The risks are high. Employees worry about increasing inequalities, misinformation, and losing control of autonomous AI systems. "It's really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns," said Daniel Ziegler, a former OpenAI employee.

This sentiment aligns with the values we strive for in employee relations. It's not just about AI; it's vital across all industries. Creating a safe environment for employees to voice their concerns isn't just ethical; it's good business. As we navigate new technological frontiers, we must foster a culture where whistleblowing is protected and encouraged for #accountability and improvement.

At HR Acuity, we believe that the foundation of any successful company is its people. Ensuring their voices are heard and protected is essential. As we discuss AI, we must commit to principles that uphold the dignity and rights of employees. This commitment is crucial for ethical leadership and organizational success.

#CorporateEthics #WhistleblowerProtection #HRLeadership #EmployeeRelations https://lnkd.in/epyQBAVV
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com
-
🚨 Breaking News: SEC Investigates OpenAI's Sam Altman 🚨

In a turn of events that's grabbing the tech world's attention, the U.S. Securities and Exchange Commission (SEC) is reportedly investigating Sam Altman, CEO of OpenAI, for potentially misleading investors. This development follows internal scrutiny and a brief period in November when Altman was removed and then reinstated as CEO amid concerns over transparency with the OpenAI board. 🕵️

This probe is not just about leadership dynamics; it's a deep dive into the integrity of investor communications and document-preservation practices at OpenAI, a company at the forefront of artificial intelligence innovation and valued at over $80 billion. With major stakes and investments, including a $13 billion commitment from Microsoft, the outcome of this investigation could have far-reaching implications for the AI industry. 📊

#OpenAI #SamAltman #SECAudit #TechNews #AI #CorporateGovernance #InvestorRelations