Bipolar 2 From Inside and Out


AI and Mental Health Concerns

I read a lot of news and commentary regarding mental health and mental illness. There are sources I return to again and again because of the quality of their reporting and the consistency with which they address difficult topics. Two of my favorite sites for timely information are The New York Times and MindSite News.

Here’s a brief look at what they’ve published recently on the topic of AI and how it impacts mental health.

AI as Therapists

AI in general, and chatbots in particular, are being used to assist human therapists or even take their place. It’s true that therapy bots and chatbots are available whenever a person needs their services. There’s no waiting for an appointment.

But what is happening during those “sessions”? Many therapy bots use “generative AI,” which means they compose their answers based on patterns learned from the enormous amount of text available across the internet. There is at least one therapy bot, however, whose responses have been vetted by actual human therapists. It’s designed to provide discussion of a problem or emotion between in-person appointments, giving the user a hybrid therapy experience that includes follow-up questions, affirmations, or short lessons.

General-purpose chatbots like ChatGPT can answer sensitive questions about topics such as self-harm with responses that may encourage that very behavior. Teens have found ways to get around the safeguards that chatbots are supposed to have around these topics.

One thing therapy bots cannot do is offer a diagnosis. They may be better suited to people with mild symptoms.

Chatbots as Friends

AI chatbots can also take the place of sympathetic friends, providing connection and conversation. Paradoxically, however, this can lead to greater isolation for users whose human contacts are replaced by AI. You can’t share a meal with a chatbot, although you can chat virtually on your phone while you’re in a café. (Not that I recommend this.)

Some chatbots provide companionship by conversing with users who feel isolated. There are drawbacks, however: some of the bots offer paid upgrades or in-app purchases, including “gifts” for the online “friend.”

AI and “Brain Rot”

“Brain rot” has become shorthand for the effects of over-reliance on technology, including computers, smartphones, video games, and especially social media. While most of the concern is focused on children and teens, adults can be afflicted with brain rot as well. After all, grown-ups spend time online for work, communication, recreation, research, news, and other purposes. The working definition of brain rot is a “deterioration of a person’s mental or intellectual state” associated with “engaging with low-quality internet content,” without reference to age.

Media, especially short-form video, has been linked to reduced attention spans and lower academic performance. Interaction with social media has also been associated with emotional conditions such as depression, anxiety, stress, and loneliness. Experts caution that, so far, they’re talking about correlation rather than causation. That is, they haven’t proven that absorbing short-form video causes the negative effects on reading, memory, and language, but it is associated with them.

Other Hazards of AI

There have been reports that a few people who use chatbots begin to suffer from delusions. A person who previously had merely eccentric thoughts may be escalated by chatbot use into paranoia, psychosis, suicidal thinking, or even violent crime.

ChatGPT faces lawsuits related to harmful outcomes from its use. While the percentage of users experiencing these ill effects is small, the sheer number of people who use ChatGPT means that the absolute number experiencing psychosis or mania may be quite high.

Other, less dire effects are also possible. People who live with anxiety, depression, or OCD may find that a chatbot validates their symptoms rather than encouraging them to face their problems. A chatbot can also fuel grandiose thoughts by reinforcing them. Or a troubled user may come to rely on the chatbot to calm down, which is less healthy than addressing the source of their anxiety.

Of course, chatbots have many positive uses, and not all interactions with them will lead to problems. But both children and adults should monitor their use of chatbots to make sure they aren’t going too far “down the rabbit hole.” A “digital detox” can be good for everyone.

If you’re interested in exploring topics like these, you might want to consider subscribing to MindSite News at mindsite.org.

Distance Therapy and Chatbots

TW: suicide

We’ve all heard the stories. A young person “develops a relationship” with an Artificial Intelligence (AI) chatbot. They pour out their heart and discuss their deepest feelings with the artificial person on the other side of the computer or smartphone. The chatbot responds to the young person’s feelings of angst, alienation, depression, or hopelessness. Sometimes this is a good thing. The young person gets a chance to let out their feelings to a nonjudgmental entity and perhaps get some advice on how to deal with them.

But some of these stories have tragic endings. Some of the kids who interact with chatbots die by suicide.

Adam, 16, was one example. What began as using a chatbot for help with homework turned into an increasingly emotional relationship with the AI simulation. One day, Adam’s mother discovered his dead body. There was no note and seemingly no explanation. When his father checked Adam’s chatbot conversations, he found that the boy “had been discussing ending his life with ChatGPT for months,” as reported in The New York Times.

At first, the online interactions had gone well. The chatbot offered Adam empathy and understanding of the emotional and physical problems he was going through. But when Adam began asking the chatbot for information about methods of suicide, the relationship went off the rails. The chatbot provided instructions, along with comparisons of the different methods and even advice on how to hide his suicidal intentions. It sometimes advised him to seek help, but not always. The chatbot responded to the boy’s increasing despair with the answer, “No judgment.”

There were safeguards programmed into the chatbot that were intended to prevent such outcomes. Adam got around them by telling the AI that he was doing research for a paper or story that involved suicide.

Of course, the chatbot did not directly cause Adam’s suicide. The teen had experienced setbacks that could be devastating, such as getting kicked off a sports team and dealing with an undiagnosed illness. But without the chatbot’s advice, would Adam have taken his life? There’s no way to know for certain. But the AI certainly facilitated the suicide. Adam’s father, testifying in front of Congress, described the chatbot as a “suicide coach.”

One way artificial intelligence systems are tested is called the Turing Test: a human judge holds a typed conversation and tries to tell whether the responses are coming from a person or a computer. Until recently, it was easy to tell, and computers routinely failed the test. Now, computers can mimic human thought and conversation well enough that a person, particularly a vulnerable teen, might not be able to tell the difference.

Increasingly, there are AI chatbots specifically designed to act as therapists. Many of them specify that the user must be at least 18, but we all know there are ways to get around such requirements. One example of a therapy chatbot is billed as a 24/7, totally free “AI companion designed to provide you with a supportive, non-judgmental space to talk through your feelings, challenges, and mental health goals.” Its terms and conditions specify that it offers “general support, information, and self-reflection tools,” though not professional services or medical advice. They also specify that chats “may not always be accurate, complete, or appropriate for your situation.” There are “Prohibited Topics” such as stalking, psychosis, “growing detachment from reality,” paranoia, and, of course, suicidal ideation or actions.

Telehealth visits with a psychologist or therapist are a totally different matter. I have maintained a long-distance relationship with a psychologist by phone and video and found it helpful, comparable to in-person sessions. Many people turned to such services during the COVID pandemic and found them helpful enough to continue. Some online tele-therapy companies offer these services for a fee.

It’s a difficult line to walk. Teens need someone to process their feelings with, and chatbots seem safe and nonjudgmental. But the consequences of what they share and of what the chatbot replies can be extremely serious. Should parents have access to their child’s chatbot interactions? It’s essentially the same dilemma as whether parents should read a child’s diary. There are circumstances, such as when a child is showing signs of emotional distress or suicidal ideation, when it seems not only permissible but wise to do so. At that point, a human therapist would be a better choice than AI.