AI Hallucinations & Privacy: A Reputational Harm Nightmare
Continuing the previous discussions on the privacy consequences of AI applications, today I will talk about how conversational AI tools such as ChatGPT can affect privacy by causing reputational harm.
Reputational harms, in the privacy context, according to Profs. Solove & Citron:
"impair a person’s ability to maintain 'personal esteem in the eyes of others' and can taint a person’s image. They can result in lost business, employment, or social rejection."
If a journalist publishes malicious lies about you in a newspaper, there will be reputational harm; if a chatbot, when prompted to answer "who is Luiza Jarovsky," makes up fabricated information about me, there can also be reputational harm, depending on the content the chatbot outputs.
Isn't everybody saying that these AI-based chatbots are great productivity tools that will transform the way we work and live? Can they "invent" information?
Yes, they can.
AI hallucination is a term used to refer to cases in which an AI tool gives an answer that humans know to be false. According to an article by Satyen Bordoloi,
"AI hallucinations occur in various forms and can be visual, auditory or other sensory experiences and be caused by a variety of factors like errors in the data used to train the system or wrong classification and labelling of the data, errors in its programming, inadequate training or the systems inability to correctly interpret the information it is receiving or the output it is being asked to give."
When dealing with AI-based chatbots like ChatGPT, these hallucinations happen to millions of users every day.
In an interview with Datanami, Peter Relan, co-founder of Got It AI, a company that develops AI solutions, said: "the hallucination rate for ChatGPT is 15% to 20% (...) so 80% of the time, it does well, and 20% of the time, it makes up stuff."
One of Relan's company's AI products is a "truth-checker," a tool trained to detect when ChatGPT (or another large language model) is hallucinating. He said that this truth-checker is 90% accurate. So if we do the maths: with a 20% hallucination rate and a checker that catches 90% of hallucinations, roughly 2% of what AI-based chatbots say will be hallucinations that go undetected.
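To make that arithmetic explicit, here is a minimal sketch. It assumes the 20% hallucination rate and 90% detection accuracy quoted above can simply be multiplied, i.e. that the rates apply uniformly and independently to every answer, which is my simplification rather than anything stated in the interview:

```python
# Rough estimate of undetected hallucinations, assuming the quoted rates
# apply uniformly and independently to every answer (a simplification).

hallucination_rate = 0.20   # upper bound quoted by Relan: 20% of answers are made up
detection_accuracy = 0.90   # the truth-checker catches ~90% of hallucinations

undetected = hallucination_rate * (1 - detection_accuracy)
print(f"Hallucinations that slip past the checker: {undetected:.0%} of all answers")
# -> Hallucinations that slip past the checker: 2% of all answers
```

In other words, even with a dedicated truth-checker in place, about one answer in fifty would still be an undetected fabrication under these assumptions.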
According to Relan, OpenAI (the company behind ChatGPT) and other AI developers can make efforts to reduce the hallucination rate, but "the hallucination problem will never fully go away with conversational AI systems."
If we are talking about privacy and reputational harm, the hallucinations that matter are those that affect individuals.
In her article for MIT Technology Review, Melissa Heikkilä mentions a case involving BlenderBot (Meta's chatbot demo for research purposes) and Maria Renske "Marietje" Schaake, a Dutch politician, former member of the European Parliament, and now the international policy director at Stanford University's Cyber Policy Center and an international policy fellow at Stanford's Institute for Human-Centered Artificial Intelligence. BlenderBot called her a terrorist, directly accusing her without being prompted to do so. According to Heikkilä, the probable origin of this hallucination was an op-ed Schaake wrote for the Washington Post in which the words "terrorism" or "terror" appeared three times.
Another example is what happened to Diogo Cortiz, a cognitive scientist and futurist. He recounts that he asked ChatGPT to list the books written by his Ph.D. supervisor, Prof. Lúcia Santaella, a semioticist well known in her field. The chatbot answered with a list of five titles. He found it strange, as he knew she had written more than 40 books. After checking, he realized that none of the listed books existed. All of them used words common to the professor's field (and she plausibly could have written books with those titles), but they were hallucinations.
Others have been posting examples of hallucinations online, and if the hallucination rate is indeed 15-20%, millions of them are happening every day.
When talking specifically about personal data and reputational harm: people, especially journalists, public speakers, and anyone with a strong online presence, are having lies and distortions about their lives output by AI chatbots. They have no idea in what contexts this information is being output, nor how to delete or correct it. The right to erasure and the right to rectification are core data protection rights. How are these rights being applied in the context of AI chatbots?
When we use a search engine to look for a person online, we have third-party sources to choose from, linked by the search engine. We can be critical, compare sources, and select the ones we consider legitimate. If people start using conversational AI systems for everything, including fact-finding, the chatbot offers only one answer, presented as "the truth," as if we were consulting an oracle.
In this context, AI chatbots' potential for reputational harm is immense. Nearly one-fifth of the answers, stories, bios, and facts about people generated in response to prompts will be hallucinations. What I do know is that there will be plenty of work for lawyers.
💡 I would love to hear your opinion. I will share this article on Twitter and on LinkedIn, you are welcome to join the discussion there or send me a private message.
-
🎓 Our specialized privacy courses
- April cohort: Privacy-Enhancing Design: The Anti-Dark Patterns Framework (4 weeks, 1 session per week). Register now using the coupon TPW-10-OFF and get 10% off.
- May cohort: Privacy & AI: Regulation, Challenges, and Perspectives. Join the waitlist.
- June cohort: Privacy-Aware Parenting. Join the waitlist.
To learn more, visit: implementprivacy.com/courses
-
🔁 Trending on social media
Privacy & AI Intersections. See the full thread here.
-
📢 Privacy solutions for businesses (sponsored)
Discover Yes We Trust, the privacy hub to stay up to date on industry news, gain insights from experts, and connect with other privacy-minded professionals. Align your privacy strategy with your company's business goals by attending our webinars and in-person events and leveraging our blog and private LinkedIn group.
Watch the replay of the first Yes We Trust webinar! We gathered several data privacy experts, including Luiza Jarovsky and Stéphane Hamel, to discuss the state of data privacy in 2023. They shared feedback and thoughts around Privacy & UX, Analytics & Compliance and the GDPR as a global standard. This webinar is a must if you’re looking for the main privacy topics to watch for in 2023. Watch it on-demand here!
-
📌 Privacy & data protection careers
We have gathered relevant links from large job search platforms and additional privacy jobs-related info on this Privacy Careers page. We suggest you bookmark it and check it periodically for new openings. Wishing you the best of luck!
-
✅ Before you go:
- Did you enjoy this article? Share it with your network so they can subscribe to The Privacy Whisperer.
- For more privacy-related content, check out The Privacy Whisperer Podcast and my Twitter, LinkedIn & YouTube accounts.
- At Implement Privacy, I offer specialized privacy courses to help you advance your career. I invite you to check them out and get in touch if you have any questions.
See you next week. All the best, Luiza Jarovsky