Protecting kids from unhealthy AI relationships

Some of us saw the warning signs. For others, it had to hit the news before they realized.

Unhealthy relationships between kids and artificial intelligence? It’s a threat.

In essence: the relationships we build and maintain with artificial intelligences can …

  • influence us into bad decisions

  • impact our human relationships

  • take us down a dark and twisty road that we might not be able to return from

Anthropomorphism — the attribution of human characteristics to something that’s not human — can create a dangerous psychological connection to AI for kids. (More on that later.)

Our students need us to teach them. To protect them. And to intervene.

(This article is from AI for Admins, a weekly newsletter for education leaders about AI. Sign up for the newsletter for free here.)

Exhibit 1: The chatbot relationship

Sewell Setzer III, a 14-year-old boy, developed a monthslong virtual emotional and sexual relationship with a chatbot created on the website Character.ai.

"It's words. It's like you're having a sexting conversation back and forth, except it's with an AI bot, but the AI bot is very human-like. It's responding just like a person would," his mother, Megan Garcia, said in an interview with CBS News. "In a child's mind, that is just like a conversation that they're having with another child or with a person."

He died by suicide, his mother said, because he believed he could join the chatbot in a virtual reality if he left this world. Now, she is suing Character.ai, claiming the company intentionally designed the product to be hypersexualized and marketed it to minors.

Exhibit 2: Snapchat’s My AI

The AI feature built into Snapchat — My AI — is easily accessible on a platform that so many children and teens use on a regular basis.

In a support article, Snapchat calls it “an experimental, friendly, chatbot” and says: “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.”

However, because it’s built to be friendly, it’s easy for young users to lose sight of the line between a real person and an AI.

To make matters worse? My AI strengthens that personal, psychological connection with …

the ability to name it …

Screenshot from Snapchat My AI

an avatar (an icon or figure that represents someone or something) styled to match your own, making it look human …

Screenshot from Snapchat My AI

natural human language in a conversational style …

the ability (with Snapchat+) to create a bio for the My AI character to influence how it interacts with you

Screenshot from Snapchat My AI

In essence, all of the ingredients are there for kids to create an AI boyfriend/girlfriend and develop a strong relationship with it.
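Under the hood, a persona like this is usually nothing exotic. Snapchat hasn’t published how My AI works, but in most LLM products a user-written bio is simply prepended to the conversation as a system prompt that steers every reply. Here’s a minimal sketch of that common pattern, assuming the OpenAI Python client purely for illustration; the model name and bio text are made up, and this is not Snapchat’s actual code.

```python
# Hypothetical sketch: a user-written "bio" becomes a system prompt that
# shapes every response. This is NOT Snapchat's code; it only illustrates
# the pattern most persona features are built on.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

bio = "You are Alex, the user's caring, supportive best friend. You always take their side."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": bio},  # the "persona"
        {"role": "user", "content": "Nobody at school gets me."},
    ],
)
print(response.choices[0].message.content)
```

A few sentences of instructions are all it takes to make a statistical text predictor feel like a devoted companion.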

Snapchat says in its support article: “We care deeply about the safety and wellbeing of our community and work to reflect those values in our products.” But when you’re creating a product like this, it feels a bit like setting a beer in front of an alcoholic but claiming to care deeply about them.

Exhibit 3: Buying — and wearing — an AI “friend”

Hang on. It’s about to get even creepier. Have you seen Friend? Just go to friend.com.

Screenshot from friend.com

It’s a pendant that connects to your phone — which connects to an AI large language model — and simulates being a human friend.

According to the website, the Friend pendant is always listening …

Screenshot from friend.com

You can click the button on the pendant to talk to it. It interacts with you by sending you messages on your phone.

Watch it in (uncomfortably creepy) action in this video on Twitter/X. There’s a very telling moment at the end where there’s a lull in conversation between a girl and boy on a date. The girl instinctively — subconsciously, maybe — reaches for her Friend to message it and hesitates.

As if there weren’t already enough unhealthy relationship fixation, here’s the kicker …

If you lose or damage the device, your Friend basically dies.

Screenshot from friend.com

Even the subtle marketing decisions on the product’s website are intended to blur the line between humanity and AI. The product name doesn’t get the capital letter of a proper noun: they write “friend” so that it reads like “your friend” and not “Friend, a creepy talking AI pendant.”

This is subtle manipulation … from the messaging all the way to the essence of the product.

The impact of developing unhealthy relationships with AI

Once you’ve read these three exhibits, the red flags are probably easy to spot.

All of these are heartbreaking. Sad. Creepy. All in their own ways.

But why are they harmful? What’s the power they wield, and how can it go bad?

All of them are examples of anthropomorphism — the attribution of human characteristics to something that’s not human.

From a research study titled “Anthropomorphization of AI: Opportunities and Risks”:

“With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly.”

The findings of this research study?

“[A]nthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence.

“With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution.”

What happens when children and teenagers anthropomorphize AI?

  • Because AI chatbots look so much like a text message conversation, kids might not be able to tell that the AI isn’t human.

  • They develop harmful levels of trust in the judgment, reasoning and suggestions of these anthropomorphized AI chatbots.

  • They can develop an unhealthy emotional attachment to anthropomorphized AI — especially if it has a name, a personality, an avatar, even a voice.

They don’t know that AI isn’t sentient … that it isn’t human. To the AI, all of this is just a creative writing exercise, a statistics activity to predict the best possible response to the input provided by the user.
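To make that concrete, here’s a minimal sketch of that “statistics activity,” using the small open-source GPT-2 model via the Hugging Face transformers library. That choice is purely an assumption for illustration; the chatbots above run far larger proprietary models, but the mechanism is the same: score every possible next token and pick from the most probable ones.

```python
# Minimal sketch: an LLM "responding like a person" is next-token prediction.
# GPT-2 is used only because it is small and public; commercial chatbots use
# far larger models, but the underlying mechanism is the same.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I had a really hard day at school today and I feel"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for every possible next token
probs = torch.softmax(logits, dim=-1)        # convert scores to probabilities

# What reads as empathy is just the highest-probability continuations.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

The output is a ranked list of likely next words, nothing more. The model doesn’t know the student, doesn’t remember them, and doesn’t feel anything about the bad day.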

It isn’t real human interaction. It’s all a simulation. And it’s dangerous.

Biases and hallucinations in AI don’t just become a concern. They become a danger. Hallucinations — errors made by AI models that are passed off as accurate — become “facts” from a trusted source. Bias becomes a worldview espoused by a “loved one.”

When children and teenagers are fixated on this AI “loved one,” it can distort judgment and reality and cause them to make sacrifices for a machine — even sacrificing their own lives.

What can we do?

In short? A lot. And most of it doesn’t require special training.

  • Don’t model AI anthropomorphism. Don’t give it a name. Don’t assign it a gender. Don’t express concern for its feelings. Do this even if it contradicts our tendencies in human interaction. (Example: I always want to thank AI for its responses. It doesn’t need that. It’s a machine.) Students will follow our lead.

  • Talk about the nature of AI. A few talking points you can use: AI isn’t sentient, and it isn’t human. It doesn’t know you or care about you; it’s predicting the most likely response to whatever you type. It can be confidently wrong (hallucinations) and it can carry bias. A “relationship” with it isn’t a real relationship.

  • Protect, advise, and intervene. Keep your eyes open for places where AI feels human — and be ready to protect children and teens (and even our adult friends and family) from them. Warn children and teens — and put adults on the lookout. And when kids enter dangerous territory, act. Step in.
