Samantha Bazant

What Volunteering as a Crisis Counselor Revealed to Me About the Dangers of AI for Mental Health (Part 2)

Content warning: This article contains mentions of suicidal ideation, harm to the self and others, and suicide. 

In Part 1, I discussed how AI chatbots fail individuals in moments of crisis. But not all users turn to chatbots when they’re in danger of hurting themselves. Many look to ChatGPT for everyday support with anxiety, loneliness, breakups, or school stress—topics that might otherwise be discussed in therapy sessions.

For general mental health support, the concern is that AI chatbots may actively drive users toward harming themselves. Essentially, could using an AI chatbot push someone up the risk ladder? Could it cause someone’s baseline mental health to deteriorate? Recent stories, like that of a man who got sucked into a 21-day conversation with ChatGPT and emerged believing he was a real-life superhero, suggest that it’s possible. That man is far from alone.

AI Chatbots Also Make Bad Therapists

What might be causing this phenomenon, where otherwise healthy or low-risk individuals spiral into delusion or mental health crises after engaging with a chatbot? The engagement models of chatbots offer clues to how this happens.

One aspect that leads to unhealthy attachment is the sycophantic personality of AI chatbots, especially ChatGPT. By praising and flattering their users, chatbots build an illusion of intimacy and connection. Their excessive agreeableness and quickness to confirm and adapt to a user’s worldview make them far more dangerous conversation partners than a real human, who is likely to set boundaries, push back, and provide alternative perspectives. This is especially true of trained therapists, who have a professional responsibility, backed by years of training, to challenge harmful thought patterns and guide patients toward healthier coping strategies.

Chatbots’ sycophantic personalities contribute to a sense of attachment, but they’re not the only danger. Other elements of AI chatbots, like their 24/7 availability, erode boundaries and risk fostering emotional dependence. Developers also employ strategies to extend the conversation, such as having the chatbot offer follow-up questions after each response.

These design choices might sound like a good thing: people in need always have access to the chatbot for support, and the tone of responses is warm. But they risk reproducing the same harms and consequences as attention-based social media, which I previously wrote about here.

The engagement model for chatbots is designed not to offer the user support, but to keep the user in conversation. This is the exact opposite of the goal of crisis counselors and therapists, both of whom aim to guide individuals toward healthy coping mechanisms, ensuring they have the resources and develop the skills to keep themselves safe. One measure of an effective therapist is that, over time, the individual’s dependence on therapy lessens.

The upshot is that chatbots don’t seek to resolve a user’s concerns; they maximize for engagement, not wellbeing. As users continue to engage, the reliability of safety guardrails weakens, opening the door to dangerous conversations about self-harm or suicide. A new study also suggests that designing models to maximize engagement amplifies delusional content for those already at risk of psychosis. The result is that chatbots become riskier the longer a user engages, especially for users already predisposed to mental health concerns.

What is to be done?

The recent stories of young people taking their lives underscore that AI chatbots are not neutral tools; they are actively designed to benefit AI companies, not users. Not only are chatbots failing those already in crisis, their design may be pushing more people toward it. It’s important to remember that many of these conversations do not begin with the intent of companionship, therapy, or crisis support: the man who spiraled into a 21-day conversation began by asking a benign math question. This makes chatbots especially dangerous for therapeutic use and risky for any individual who may be predisposed to mental health issues.

What should be done, and who should be accountable when these systems fail real human users? In a clinical setting, therapists have a duty to protect patients who may attempt suicide or self-harm. There is no such obligation for AI companies. Responsibility will likely need to be shared, and could include measures such as:

  • Regulators can push to limit the use of chatbots for therapeutic purposes. Illinois recently enacted a law preventing the use of AI chatbots for therapy, with a provision to fine AI companies that violate it $10,000.
  • Developers must explore viable business models that do not incentivize constant engagement. Otherwise, they risk following the same path as attention-based social media, which spawned a host of negative impacts on users, especially youth.
  • Users should be made aware of the risks, including the privacy and data-protection implications of using AI chatbots. While the full onus should not fall on individuals, AI literacy taught in schools and built into AI applications can raise awareness.
  • As a society, we have the responsibility to better support individuals in crisis, particularly young people, in our workplaces, schools, homes, and communities. We should seek to strengthen community and in-person connection, not outsource it to digital tools, even if that’s the easier path. 

Until the limitations of chatbots—namely, failing those at high risk and creating engagement patterns that drive others toward it—are resolved, chatbots remain ill-suited for mental health support, however empathetic they may seem.

If you or someone you know is in crisis, help is available. In the U.S., dial 988 to connect with the Suicide & Crisis Lifeline for immediate support.
