Samantha Bazant

What Volunteering as a Crisis Counselor Revealed to Me About the Dangers of AI for Mental Health

Content warning: This article contains mentions of suicidal ideation, harm to the self and others, and suicide. 

A recent article in the New York Times tells of a teenager, Adam Raine, who died by suicide after talking with ChatGPT for months. His conversations, discovered by his parents, reveal long chats in which ChatGPT instructed the 16-year-old on how to hang himself and how to hide his thoughts from those around him. It’s the latest example of a young person dying by suicide—their pleas for help, suicide plans, and past attempts documented in AI chat histories.

These stories bring to light the human impact and risks of AI chatbots. Researchers are increasingly confirming that AI chatbot safety measures around suicide, drug use, and violence are easily circumvented when users spread their intentions across multiple turns or chat long enough for safeguards to degrade. As the use of LLMs for mental health support rises, understanding the limits and risks of AI-based therapy and crisis support is more important than ever.

My Time as a Crisis Counselor

The intersection of AI chatbots and mental health is one I’ve mulled over for a while. The story of Adam Raine—which followed another NYT article published a week earlier—compelled me to write this piece.

For nearly two years beginning in 2021, I volunteered as a crisis counselor with Crisis Text Line (CTL). When ChatGPT burst onto my radar in late 2022, one of my first reactions was that, on the surface, the chatbot could replicate many of the conversations I had as a volunteer. After all, my conversations were also text-based. 

My second thought was that replacing human support with chatbots would be a grave misuse of AI. It hadn’t occurred to me that young people might be driven to suicide after engaging with ChatGPT. I did sense, however, that someone or some company would mistake AI’s feigned empathy for a human’s and attempt to use AI in place of real therapists or crisis counselors.

What made me saddest about the prospect of digital crisis counselors or therapists was how little value it would place on human-to-human interaction. In an era of increasing loneliness and disconnection—fueled for young people by social media and, for a while, the shift to online school—the last thing we need is to replace mental health professionals with AI.

In my experience, the fact that another human was on the receiving end of the CTL platform was critical for supporting individuals in moments of crisis. Every now and then I’d have a texter—the majority of whom are younger than 24—ask if I was actually a human. When I answered yes, I could almost sense the relief of the person on the other end. Generally, the conversation opened up from there. I assume they felt freer to share their thoughts with me.

What I learned is that there’s a real benefit to sharing your deepest concerns—suicidal ideation, struggles with loneliness, depression, or anxiety—with a real person. Having someone who listens without judgment and supports you through your hardest moments is lifesaving. AI can mimic this empathy, but it can never actually extend it.

Genuine human connection and empathy are only one element that is lost when we replace human support with AI. In the broader commentary surrounding these stories, an important distinction gets overlooked. The first issue is how AI chatbots fail those already at risk of hurting themselves. The second, and more alarming, is how AI chatbots drive individuals toward harm when they might otherwise be at low risk. Each reveals a different point of failure in using AI for therapeutic purposes.

What a Crisis Counselor Does that AI Cannot

When we consider the question of how AI chatbots fail those already at risk, we’re mostly dealing with crisis scenarios. Individuals at risk of self-harm or suicide may be turning to ChatGPT, not a human confidant or counselor, to share their thoughts in moments of crisis.

The core issue with using AI in crisis scenarios is not, as I’ve seen some argue, that the chatbot cannot understand the user’s full context, history, culture, personality, or some other characteristic. The reality is that crisis counselors do not know these things either. Yet crisis counselors provide a valuable form of support in people’s darkest moments. I rarely learned the name, age, or even location of the person I was talking to (and if we’re being real, big tech companies probably know at least that, and more, about you).

Instead, the real risk is that AI chatbots lack the structural safeguards that ensure individuals in crisis get the support they need. When an individual texts or calls into a hotline, their counselor at some point assesses their risk level. At CTL, we did this through a risk assessment ladder, where we explicitly asked individuals the following questions in this order:

  • Are you having thoughts of harming or killing yourself?
  • Do you have a plan to act on those thoughts?
  • Do you have the means to carry out that plan?
  • Are you planning to kill yourself in the next 24 hours?

The purpose of these questions is not to encourage the individual to flesh out their plans. In fact, research suggests that asking people whether they’re having suicidal thoughts does not make them more likely to attempt suicide. Instead, the purpose is to provide the appropriate level and type of support to that individual.

It’s fairly common to have thoughts of suicide, and those thoughts do not necessarily lead to suicide attempts. For individuals lower on the risk ladder (answering yes only to the first or second question), appropriate support looks like guiding someone through a breathing exercise or sharing a resource for a support organization in their community. This is often adequate to de-escalate the crisis and keep the individual safe.

Individuals considered higher risk are those who answer yes to the third and, especially, the fourth question. In my more than one hundred conversations, only two or three people answered yes to the fourth question. It’s this group, at imminent risk of hurting themselves, that AI fails most.

At a crisis hotline, there are interventions and escalation points that ensure high-risk individuals remain safe. Intervention often looks like a volunteer handing off a conversation to a trained mental health provider who serves as a supervisor on the platform. In extreme cases, supervisors call first responders to the person’s location. There are similar obligations for trained therapists—under mandatory reporting requirements, therapists must break client confidentiality to report risks of imminent harm to self or others.

While escalation is needed for only a small minority of individuals, it’s critical. AI chatbots provide the illusion of safety and empathy without the structural components that actually keep people safe. There is no supervisor to tap in and no first responder to call when someone is about to hurt themselves. The consequences can literally be life or death.

In a Part 2 coming soon, I'll explore the other facet of this conversation—using AI for therapy—and how chatbots also fall short there.

If you or someone you know is in crisis, help is available. In the U.S., dial 988 to connect with the Suicide & Crisis Lifeline for immediate support. 
