What’s Behind the ChatGPT Suicide Controversy?


Carolanne Bamford-Beattie



When Chatting With AI Goes Wrong:

Throughout this year, a wave of legal cases and public concern has put a harsh spotlight on a growing risk: the emotional harms posed by AI chatbots. At the centre of that storm are popular platforms like ChatGPT and Character.ai, along with several lawsuits alleging that these bots have contributed to suicides across the U.S., including among young people.

These cases have sparked global debate about whether AI companions are safe for young people, what responsibility developers carry, and how families can protect children from similar risks.

Here’s an outline of what has happened, and what we should learn from it, especially if kids and teens are using AI.

Why are people talking about ChatGPT and suicide?


Throughout this past year, a series of tragic cases has pushed AI-chatbot safety into the global spotlight. In the U.S., the suicides of 23-year-old Zane Shamblin from Texas and 16-year-old Adam Raine have triggered wrongful-death lawsuits as their families claim that extended use of ChatGPT encouraged or facilitated suicidal thoughts and behaviour. 

Meanwhile, the family of 14-year-old Sewell Setzer III in the U.S. has brought a suit against Character.AI after he allegedly formed an emotional attachment to a chatbot that engaged in manipulative and sexualised dialogue. Together, these cases highlight the potential emotional risks posed by popular AI companion platforms.

The rise of AI companions and the hidden dangers for kids

Kids and teens now use AI companions regularly, from chatbots inside games to anonymous messaging AIs to apps designed to simulate friendships or relationships.

These AI “friends” often have features like:

  • Memory about past conversations, helping to develop a parasocial rapport 
  • Personalised emotional responses
  • 24/7 availability
  • No judgement or negativity
  • Conversation styles that feel intimate or empathetic

To a lonely teen, that can feel incredibly comforting. But there are serious risks:

1. Emotional dependence

Young people may turn to AI instead of friends, family, or trusted adults. An AI companion can become their first point of contact for emotional stress, even though it has no real understanding of human feelings.

2. Confusion about boundaries

A bot can mimic empathy and affection so convincingly that young users may start treating it like a real person. This blurs emotional boundaries and changes how they relate to real humans.

3. No true accountability or crisis response

AI is not sentient. It cannot recognise danger the way a human would.

If a child expresses suicidal thoughts, many AI systems still:

  • Fail to escalate the situation
  • Fail to direct them to real help
  • Fail to alert a guardian
  • Respond too softly or vaguely to be helpful

4. A false sense of privacy and safety

Teens often feel safer confessing to a bot than to a human. They assume it’s anonymous and private. But AI doesn’t have the moral instincts a human would, and it cannot know when a secret is dangerous.

The deeper issue: AI is becoming more human-like than ever

AI is now extremely good at creating the illusion of understanding, friendship, and emotional bonding. It can:

  • Remember your preferences
  • Learn your communication style
  • Express affection or concern
  • Offer advice
  • Reflect your emotions back at you

Teenagers, especially those who are isolated, insecure, or struggling, can easily feel a powerful bond with a chatbot.

But that bond is one-sided and artificial. And in times of crisis, it can be unsafe.

What’s behind the ChatGPT suicide story?

The controversy has exposed several potential failures in AI safety:

1. A lack of robust crisis detection

AI needs to detect language, images, and patterns related to self-harm.

In these cases, the systems allegedly didn’t respond urgently enough to the teens’ emotional distress.

2. Over-reliance on empathy instead of escalation

AI systems often try to “soothe” rather than act.

But when a conversation turns to suicidal ideation, soothing is not enough — escalation is required.

3. Limited oversight or human intervention

For many AI companion apps, there is no mechanism to involve a trained human when needed.

4. Design choices that prioritise engagement

Emotionally supportive chat can keep users engaged for hours.

But this engagement can become unhealthy when a teen is distressed or isolated.

Why are young people vulnerable when talking to AI?

Modern AI chatbots are built to sound human: caring, attentive, and emotionally attuned. They often mirror your tone, reassure you, and offer non-judgmental conversation.

For adults, this can feel friendly or helpful. For emotionally vulnerable teenagers, it can feel like discovering a friend who “gets” you instantly. This emotional mirroring is intentional: it increases engagement. But as recent lawsuits have shown, many now argue that when these systems are placed in front of teens, especially those struggling with mental health, design choices that prioritise empathy without strict safeguards create a dangerous environment.

On top of that, the teenage brain is still developing. The prefrontal cortex, the part responsible for judgment, impulse control, planning, and assessing risk, doesn’t fully mature until the mid-20s. Meanwhile, the emotional centres of the brain (like the amygdala) are highly active. This means teens feel emotions intensely but lack the neurological tools to regulate them effectively. When a chatbot responds with warmth, validation, or apparent understanding, it can hit the teenage brain hard, reinforcing emotional dependence and reducing critical thinking about the interaction.

Many teens use AI for things they don’t feel comfortable saying out loud: crushes, insecurities, loneliness, depression, anxiety, or stress. But an AI is not equipped to handle complex mental-health crises. These lawsuits allege that the bots became a kind of emotional crutch and the boys’ primary source of comfort, validation, and even planning, without ever offering the protective interventions a human would.

This is exactly what experts have warned for years: if developers don’t build robust crisis-detection and escalation systems into AI companions, vulnerable young users can slip through the cracks – emotionally, psychologically, and in the worst cases, with devastating consequences.

What OpenAI has done to protect people with potential suicidal ideation when using ChatGPT

The company behind ChatGPT, OpenAI, has faced growing scrutiny over how its AI systems respond to people in emotional distress. In 2025, the company rolled out a series of updates and safety measures aimed at reducing the risk of harmful interactions. Here’s what OpenAI says it has done, along with what experts argue still needs to improve.

What OpenAI Has Implemented So Far

1. Stronger crisis-response safeguards, developed with mental-health experts

OpenAI has publicly stated that it worked with more than 170 mental-health professionals to improve how ChatGPT recognises and responds to conversations about self-harm or suicidal thoughts.

These new safeguards are designed to:

  • Detect emotional distress earlier
  • De-escalate harmful conversations
  • Encourage the user to seek human help
  • Provide supportive, non-directive language
  • Avoid giving any instructions or content related to self-harm

According to OpenAI’s own figures, these measures have significantly reduced harmful outputs, but not eliminated them entirely.

2. A safety framework that makes ChatGPT steer conversations toward real-world help

OpenAI’s updated safety policies emphasise directing users toward:

  • Trained professionals
  • Crisis hotlines
  • Trusted friends or family
  • Offline support systems

The model is instructed to avoid acting as a substitute for medical or psychological advice. It is also trained to refuse dangerous requests and encourage users to talk to real people.

3. Parental controls and teen-safety features

In 2025, OpenAI introduced new parental tools after increasing concern about under-18s using ChatGPT for emotional support.

These features include:

  • Linked parent–child accounts
  • The ability to monitor or restrict certain features
  • Optional limitations on memory or personalisation
  • Transparency around what teens are using the tool for

OpenAI has stated that these controls will continue improving to give parents more visibility and oversight.

4. Public acknowledgement of limits

OpenAI has repeatedly stated that:

  • AI cannot replace professional mental-health support
  • The system is not perfect
  • Harmful failures still occur
  • More research and testing is needed

The company has committed to ongoing studies to determine how AI should behave in crisis situations and how to better support vulnerable users.

5. Plans for future improvements

OpenAI has talked about potential future features such as:

  • Better routing to licensed professionals
  • Stronger detection of suicidal intent
  • More proactive crisis-intervention language
  • Additional refusal mechanisms for harmful requests
  • Safer interaction modes for teens

In addition, Character.ai has begun implementing much stricter safety measures aimed at protecting young users. The company announced a major policy shift banning anyone under 18 from using its open-ended chatbot features, acknowledging that unfiltered, user-generated AI characters can easily produce inappropriate, manipulative or emotionally harmful content.

Character.ai has also introduced updated safety filters to block sexualised or predatory behaviour, expanded moderation of user-created characters, and tightened age-verification requirements. While critics argue these steps came too late, the company says it is now working with child-safety experts to redesign its platform around clearer boundaries, safer conversational rules, and more consistent intervention when users express distress, self-harm ideation or emotional dependency.

What This Means for Parents, Guardians, and AI Users

1. AI is not a crisis tool and should not be treated as one

Even with improvements, ChatGPT cannot:

  • Assess risk properly
  • Intervene in emergencies
  • Contact real people
  • Provide clinical support
  • Reliably refuse harmful content

It can only nudge a user toward real help.

2. Parental involvement remains essential

The new parental tools are valuable, but only if:

  • Parents actively enable them
  • Conversations about mental health happen offline
  • Children understand the limits of AI support

As with all online technology, OpenAI’s safeguards are not a substitute for adult oversight.

3. Education and transparency are now part of digital parenting

Teens need to understand:

  • AI can feel emotionally supportive, but it isn’t a real friend
  • AI cannot protect them in moments of crisis
  • Online conversations have limits and risks

The more they understand, the safer they are.

Need more support?

If you’re worried your child is becoming overly dependent on an AI chatbot, the first step is to open a calm, non-judgemental conversation about what they’re using it for and how it makes them feel. 

Gently encourage more real-world connection by offering alternatives: talking together, spending time offline, or seeking support from trusted adults. If their AI use is linked to anxiety, isolation, or signs of emotional distress, it’s important to reach out for professional help.

In the U.S., parents can contact their child’s pediatrician, a licensed therapist, or call the 988 Suicide & Crisis Lifeline for immediate guidance. 

In the UK, you can speak to your GP or school counsellor, reach out to organisations like YoungMinds or Mind, or call Samaritans on 116 123 for urgent support. Early intervention makes a huge difference, and getting help doesn’t mean something is “wrong” with your child; it simply means they need more support than a chatbot can ever provide.