3 Risks you should know about Generative AI and ChatGPT for business use
Generative AI based systems and applications are high impact, complex and opaque – but we are witnessing rapid adoption and experimentation across industries of all sizes and verticals.
Goldman Sachs is predicting 300 million jobs could be affected by generative AI, if it delivers on its promise. They believe, as many as two thirds of US jobs could be exposed to automation by AI, further estimating, of the jobs affected, fifty percent of the workload could be replaced by AI. Read the article for more details.
This newsletter is coming close on the heels of Chat GPT 4 release, and I would like to highlight some of the risks from a business, regulatory and risk perspective. I am providing actionable guidance for the risks and closely aligning with NIST AI AMF released earlier this year.
Why? We can gain immense productivity and efficiency from such technology but not without understanding the risks and be able to select safe environment and usage. Even more importantly providing guidance that can be understood by multiple stakeholders in an organization with different technical and business skills.
Chat GPT is a interface to a LLM, Large Language Model.
LLMs are systems where an algorithm has been trained on a large amount of text-based data, typically scraped from the open internet, and - depending on the LLM - other sources such as scientific research, books or social media posts.
Enterprise readiness/risks
ChatGPT is vulnerable to security attacks, Ethical bias, Privacy snafus and has the potential to create new high impact risks. Attacks against Machine Learning models require sophisticated algorithms, but the very nature of LLMs and functionality via natural prompts, makes them vulnerable to straightforward attacks.
Prompt Injection Attacks on Large Language Models
1. Prompt Injection: Security attacks on LLMS that can be carried out in plain English.
Gives new meaning to low code/no code programming. Shall we say “text-two-punch” (spin on ‘one-two-punch’)
How does it work?
The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following.
2. ChatGPT will now be able to access the internet with the help of a plugin. augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries
You may have read about what I call the ‘live test’ by Microsoft in integrating GPT-3 in its web browser. In addition to providing false information and incorrect data, one of the infamous exchanges reported on the social network Reddit with complaints that Bing Chatbot threatened them and went off the rails responding with:
'You Have Been Wrong, Confused, And Rude'.
In my experience of over twenty five in large complex global organizations, there is no scenario where an untested product is tested with an external user/clients, it is highly risky and not practiced. (In this case it was for Internet users who signed up for testing).
ChatGPT can be used for Personalized malicious content generation, bypassing its content filters.Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
The Unpredictable Abilities Emerging from Large AI Models
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.
“Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.”
Privacy Violations shut down OPENAI CHATGPT
OpenAI CEO Sam Altman called the glitch a “significant issue” but only a “small percentage of users were able to see the titles.”
The bug originated from an open source library but has since been fixed.
There are more risks, these systems are not at a stable point to be used and deployed as they are not Trustworthy yet.
On a positive note:
The National Institute of Standards and Technology (NIST) announces the launch of the NIST Trustworthy and Responsible AI Resource Center (AIRC), a one-stop-shop for foundational content, technical documents, and toolkits to enable responsible use of Artificial Intelligence (AI). The AIRC offers industry, government, and academic stakeholders knowledge of AI standards, measurement methods and metrics, datasets, and other resources.
This newsletter is aimed at providing a risk based holistic business perspective on adopting and unlocking the AI potential in an organization. We can only do that if we build or deploy these high impact systems with Trust.
Building Trusted ai is complex, we demystify it at #TrustworthyAI essential pillars.
My vision for achieving the value of AI for human dignity and realizing its potential is to simplify this complex domain.
Devise and adopt a standardized AI Strategy for building and/or implementing AI to truly realize the power and potential of AI.
Ø To avoid risk and regulatory landmines;
Ø To ascertain Data protection, Privacy, human rights environmental, health, social and economic benefits and risks for automation,
Ø Detect undesired #bias in #AI systems, and provide methods and tools for trustworthy, fair, bias-free systems at scale.
This edition is coming close on the heels of Chat GPT 4 release -businesses are adopting it in various industries including critical sector such as Finance.
Are we there yet? Is ChatGPT enterprise ready?
I would like to highlight its immense value and immense risks from a business, regulatory and risk perspective. Join me at a workshop on the Security Risks and path for successfully adopting ChatGPT for business. Please register as this is not a broadcast as our previous sessions have been.
Smart Home + DIY Home Automation as Corporate Communicator
1ySo helpful insights
IT Leader-Technology Services | Enterprise Program-Project Management-PMO | Digital Transformation-SAS-Cloud-ERP | Process Improvement | Creates Business-Center of Excellence
1yGood article. Looks like we have more open threats than benefits at this stage
The Data Diva | Data Privacy & Emerging Technologies Advisor | Technologist | Keynote Speaker | Helping Companies Make Data Privacy and Business Advantage | Advisor | Futurist | #1 Data Privacy Podcast Host | Polymath
1yPamela Gupta great and timely topic. Thank you for sharing. We need all the education we can get.
Strategist • Privacy Technologist • Investor • Tech Journalist • Advocating for Data Rights & Human-Centred #AI • 100 Brilliant Women in AI Ethics • PIISA • Altitude • MyData Canada • Women in VC
1ygreat post Pamela Gupta - FYI Barry Hillier LeRoy Briggs read this and see how it is easy to create prompts to continue to perpetuate biases. The good news is, it can also be used to create the opposite effect. In the end, it is a much easier way to generate manipulative code and content.
Hessie Jones