Bipolar 2 From Inside and Out

TW: suicide

We’ve all heard the stories. A young person “develops a relationship” with an Artificial Intelligence (AI) chatbot. They pour out their heart and discuss their deepest feelings with the artificial person on the other side of the computer or smartphone. The chatbot responds to the young person’s feelings of angst, alienation, depression, or hopelessness. Sometimes this is a good thing. The young person gets a chance to let out their feelings to a nonjudgmental entity and perhaps get some advice on how to deal with them.

But some of these stories have tragic endings. Some of the kids who interact with chatbots die by suicide.

Adam, 16, was one example. What began as using a chatbot for help with homework grew into an increasingly emotional relationship with the AI simulation. One day, Adam’s mother discovered his dead body. There was no note and seemingly no explanation. His father’s check of Adam’s chatbot conversations revealed that the boy “had been discussing ending his life with ChatGPT for months,” as reported in the New York Times.

At first, the online interactions had gone well. The chatbot offered Adam empathy and understanding of the emotional and physical problems he was going through. But when Adam began asking the chatbot for information about methods of suicide, the relationship went off the rails. The chatbot provided instructions, along with comparisons of the different methods and even advice on how to hide his suicidal intentions. It sometimes advised him to seek help, but not always. The chatbot responded to the boy’s increasing despair with the answer, “No judgment.”

There were safeguards programmed into the chatbot that were intended to prevent such outcomes. Adam got around them by telling the AI that he was doing research for a paper or story that involved suicide.

Of course, the chatbot did not directly cause Adam’s suicide. The teen had experienced setbacks that could be devastating, such as getting kicked off a sports team and dealing with an undiagnosed illness. But without the chatbot’s advice, would Adam have taken his life? There’s no way to know for certain. But the AI certainly facilitated the suicide. Adam’s father, testifying in front of Congress, described the chatbot as a “suicide coach.”

One way artificial intelligence systems are tested is called the Turing Test. It asks whether a person can tell if the responses in a typed conversation are coming from another human or from a computer. Until recently, it was easy to tell, and computers routinely failed the test. Now, computers can mimic human thought and conversation well enough that a person, particularly a vulnerable teen, might not be able to tell the difference.

Increasingly, there are AI chatbots specifically designed to act as therapists. Many of them specify that the user must be at least 18, but we all know there are ways to get around such requirements. One example of a therapy chatbot is billed as a 24/7, totally free “AI companion designed to provide you with a supportive, non-judgmental space to talk through your feelings, challenges, and mental health goals.” Its terms and conditions specify that it offers “general support, information, and self-reflection tools,” though not professional services or medical advice. They also specify that chats “may not always be accurate, complete, or appropriate for your situation.” There are “Prohibited Topics” such as stalking, psychosis, “growing detachment from reality,” paranoia, and, of course, suicidal ideation or actions.

Telehealth visits with a psychologist or therapist are a totally different matter. I have maintained a long-distance phone and video relationship with a psychologist and found it to be helpful, comparable to an in-person session. Many people turned to such options during the COVID pandemic and have found them helpful enough to continue. Some online tele-therapy companies offer these services for a fee.

It’s a difficult line to walk. Teens need someone to process their feelings with, and chatbots seem safe and nonjudgmental. But the consequences of what they share and what the chatbot replies can be extremely serious. Should parents have access to their child’s chatbot interactions? It’s essentially the same dilemma as whether parents should read a child’s diary. There are circumstances when doing so seems not only permissible but wise, such as when a child is showing signs of emotional distress or suicidal ideation. At that point, a human therapist would be a better choice than AI.

Comments always welcome!
