Andrew Ng’s Post

Andrew Ng

Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of Landing AI

The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I'm seeing many new applications in education, healthcare, food, ... that'll help many people. Improving GPT-4 will help. Let's balance the huge value AI is creating vs. realistic risks.

There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.

Responsible AI is important, and AI has risks. The popular press narrative that AI companies are running amok shipping unsafe code is not true. The vast majority (sadly, not all) of AI teams take responsible AI and safety seriously. Let's invest more in safety while we advance the technology, rather than stifle progress.

A 6 month moratorium is not a practical proposal. To advance AI safety, regulations around transparency and auditing would be more practical and make a bigger difference.

Josh Bersin

Global Industry Analyst, I study all aspects of HR, business leadership, corporate L&D, recruiting, and HR technology. ✨

1y

Totally agree, Andrew.

While I agree with you that investing in safety measures without stopping progress is important, I also believe that the AI revolution we're currently experiencing is unlike any previous revolution in human history. In the past, there was often a time lag between the inception of a revolution and its widespread impact, which allowed humans to adjust slowly. In the information age, however, where everyone has instant access to real-time information, that lag disappears, as ChatGPT's swift mass adoption makes self-evident. Given the truly unprecedented nature of the AI revolution and its inevitable far-reaching impact, we should all be prepared for the tectonic shift that's coming. In light of that, governments need to be proactive in understanding the technology and legislating regulations that ensure communal safety. It's also a moral duty for people of your stature to urge governments to hold round-table discussions and collectively devise an adoption model that benefits everyone. Then we can create a regulatory environment that not only balances the enormous value AI can create but also addresses the real risks it poses.

Karrie Sullivan

AI Results Without Resistance. ROI in 8 Weeks. Follow me to Hack the Change Curve. Keynote speaker who talks about the psychology of hacking change in AI Adoption & Transformation.

1y

How many women are on the team? How many PoC are on the team? How many people who came from poor or disadvantaged childhoods are on the team? How many dictator/narcissist/obstructionist/sociopath mindsets are on the team? Are there any social-good mindsets on the team? Are there ethicists with TEETH on the board and on the team? How do we know the utility is trained to serve and be loyal rather than to compete? I don't want to control the creativity, research, or development. I do want to know who's building it and how they think, so we can at least gauge whether the intent will ultimately be altruistic or nihilistic when we choose what to participate in.

Mike Serra

Quality Automation Engineer

1y

The concern is not that "companies are shipping unsafe code"; it's that the societal and economic ramifications of this technology are unpredictable and potentially enormous. The government may not understand AI half as well as the engineers developing it, but that is no reason to submit ourselves to governance by tech companies. The whole world needs to have a say in how AI is used and regulated.

Harsh Shah

Associate Partner at ZS - Consulting | Data Science

1y

I agree. With any revolutionary new technology there are risks that could emerge, but the answer is not to halt the progress of that tech; it's to find ways to systematically reduce or remove those risks. When cars were invented, people got hurt a lot because they didn't know how to stop the car in time or how to cross the roads safely. We built traffic signals, driving regulations, and many other things to help make cars safer. People still get hurt, and that will never fully stop. The same applies here: we build the traffic signals and regulations for the use of AI, but we don't halt the progress.

Jacob Ayres-Thomson

Top 50 UK Data Leader, Top 5 UK AI Influencer, Founder@3AI

1y

A moratorium on development would primarily hurt those upon whom the restriction is placed, as others race ahead. China would love America to freeze AI development. It is not a wise move. IMO, there is excessive speculation about the damage AI will cause and about the idea that we are releasing a new superintelligent species. To date, every AI created remains a "tool" that serves only its intended purpose, much like a car, telephone, or hammer. Machine learning is to the neocortex what machinery was to muscle. We have not stopped using our legs because of cars, planes, and boats; instead we have learned to use each where it is best suited. So far, all evidence points to the same relationship with AI. Also, as with machinery, certain jobs will certainly cease to be economically viable. The real solution to the temporary problem of unemployment is investment in education and re-training, so that people's labour becomes economically viable again. Governments typically fail at this. We need training that boosts human productivity alongside AI. In time, AI will unlock a significantly more productive workforce. Nations that tap into this will lead the future; those that simply try to "smash the machine" are doomed to its past.

Stuart Cranney

Director of Innovation | PhD Candidate (Digital transformation)

1y

💯 I think this is more for the headlines and not grounded in reality at any level.

Álvaro Corrales Cano

Senior Data Scientist | Economist | Former IBM

1y

I agree that simply banning companies from innovating is pointless and unrealistic. However, I don't think relying on companies to willingly invest in AI safety is realistic either. Surely many will to an extent, but only insofar as it is profitable for them. Letting tech companies police themselves has already been tried, and we have ended up with monopolistic behemoths like Google, Meta, and Amazon. The environmental impact of AI is also something companies are unlikely to internalise by themselves. We need to find a middle ground.

Prateek Mital

AI, Analytics, Data Science (Consulting, BD, Research, Engineering...) to drive business outcome | ISB | IIT-B|

1y

Though AI progress should continue, the reason some people are asking for a pause is that current development is happening with a very self-centered approach and little focus on benefit to society. Responsible technology that is fair, transparent, unbiased, and aware needs better collaboration and direction-setting. The US and China are racing to maximize efficiency and productivity rather than considering the potential social and ethical implications. Time to sit back, relax, meditate, and look for the right karma.

Benjamin Kultgen, PhD

AI Ethics Advisor and Researcher | PhD - Ethics / Cognitive Science

1y

You've mischaracterized the moratorium in a way that makes it look obviously indefensible. Yes, there are immediate potential benefits from GPT-4 in education, healthcare, etc., but the potential risks are largely unknown. This is why you cannot rush a prescription drug to market even though it looks very promising. The moratorium is precisely meant to balance the huge value of AI against its risks; it is not a repudiation of that trade-off. The moratorium would be imposed by the government, just as governments impose regulatory limitations on drug development or the banking sector. You'd have to argue why such government intervention is anti-competitive and anti-innovation in this case while regulatory constraints on drug development are not, because it is not obvious. Silicon Valley has a tendency to decry regulation as anti-innovation, anti-competition, and anti-business, only to end up facing the consequences of deregulation down the line.
