Bipolar 2 From Inside and Out


Murder and Mental Illness

Murder is associated in the public mind with brain illnesses, particularly schizophrenia, bipolar disorder, and PTSD. David Hogg, anti-gun activist and mass shooting survivor, has a lot to say about the topic: “If you believe it’s mental illness, call your reps and ask that they fund mental h[ealth] programs in our schools and communities. I don’t agree it’s mental illness that causes these shootings. But we do need more funding for mental health programs to reduce the growing rate of suicide.”

Hogg has said that systemic poverty, race, and hatred are bigger motivators of mass shootings than mental illness. He also notes, “I do think it’s important to note the shooter at my high school had tons of mental health stuff. From my understanding, … there were school psychologists, there were therapists, there were all these different things involved. And I don’t think one more therapist would have made the difference for him. We need to put our politics aside, and get something done.”

The assassination attempt on former President Trump has stirred up the debate again. The assumption that mental illness is the cause of public acts of violence persists. The usual suspects include bullying, psychotropic medication, and social isolation. There have also been a lot of conspiracy theories and blame tossed around. It was Democrats. It was a “false flag” operation. It was staged. It was a foreign plot. The injury was minor. The injury almost took his life. (There may well be more I haven’t heard.)

I fully expect the mental illness hue and cry to start. In fact, it’s already begun. There have been reports that Thomas Crooks sought information on major depressive disorder and was bullied at school. (He was 20 years old when he fired at Trump. Apparently, he committed no violence while at school.) I stress that these are not facts. They have only been reported in the media and tempered by the term “allegedly.”

Personally, I don’t accept such reports at face value. Media reports in the aftermath of a shooting have so often turned out to be unwarranted, misguided, or premature. I prefer to wait for more reliable, less heated reporting that comes from official sources who have actual knowledge of the situation.

I will say that major depressive disorder is a disorder that leads to violence against oneself rather than others. Even if Crooks did have it (not proven), it seems unlikely that it was a factor in the incident. Depression more often results in suicidal ideation or attempted or completed suicide than in homicide. That he might have been seeking “suicide by cop” is even more unsupported so far and probably unknowable.

It may be true that Crooks had a mental illness, but we don’t know that yet—if we ever will—and there are other possible explanations for his actions, including garden-variety hatred, violent extremism, and political motivation.

What I do think we know is that mental illness will once again be assumed to be the cause by both the public at large and the media. They may even find some psychological “experts” who never met Crooks and never treated him to expound on his diagnosis or motivation in media interviews. That’s usually the course these things follow. Lilliana Mason, a political scientist at Johns Hopkins University, said today, “It sounds like he was relatively isolated and troubled, sad and looking for attention.”

I also firmly believe that this incident will make no difference whatsoever in the debate on gun control. And if mental illness is the cause, it will be acknowledged as a Bad Thing but will not result in any initiative that would provide funding for better care of those with SMI. A massive tut-tut and a hearty shrug are about all I expect.

I’d love to be proved wrong.

Breach of Confidentiality

One of the things that people who see a therapist dread is a breach of confidentiality. Fortunately, it almost never happens. Therapists are bound by client-therapist confidentiality, which forbids it. It’s like the seal of the confessional for priests.

There’s an exception, however, and that’s the Tarasoff warning. Here’s how it came to be.

Way back in 1969, a young woman named Tatiana Tarasoff, a student at the University of California, Berkeley, was murdered. The killer was Prosenjit Poddar, also a student at the university. They knew each other and had gone on several dates. Unfortunately, as happens way too often, the couple had differing opinions on where the relationship should go.

Poddar became obsessed with Tarasoff. She was no longer interested. So he began stalking her. He had an emotional crisis and began seeing a therapist at the university medical center.

So far, it’s a pretty typical story of a relationship, a breakup, and an extreme emotional reaction. However, it soon became much more than that.

One day, Poddar admitted to his therapist that he wanted to kill Tarasoff. (He didn’t refer to her by name, but her identity was clear). The therapist said that, if Poddar kept issuing death threats, he would have to be hospitalized. Poddar stopped coming to therapy.

The therapist was left with a dilemma as to what he should do next.

The therapist and his supervisor decided to write a letter to campus police regarding the death threats. The police interviewed Poddar in the room he shared with Tarasoff’s brother. When Poddar denied everything and said he would stay away from Tatiana, the investigation was halted. The supervisor instructed the therapist to destroy all his case notes.

Of course, Poddar continued stalking Tarasoff and confronted her. When she tried to run away, he stabbed her with the knife he was carrying, killing her. He was arrested, tried, and convicted of second-degree murder (though he had tried to plea-bargain down to manslaughter). He served five years and was deported to his native India.

Tarasoff’s parents sued the university and the therapists on the grounds that they should have warned their daughter about the death threats. The therapists countered with the client-therapist confidentiality argument and won. Later, however, the case was retried and this time, in 1976, the Tarasoffs prevailed.

Since then, over half of US states have enacted “Tarasoff laws.” Others leave the decision up to the therapist. And Maine, North Carolina, North Dakota, and Nevada have specifically ruled that Tarasoff laws don’t apply there. The laws are still controversial. For one thing, the university therapists did call the campus police. For another, the laws leave the burden of deciding whether a threat is credible, and whether to breach confidentiality, solely on the therapist. And there have been debates on whether Tarasoff warnings should be given regarding threats of physical violence that fall short of murder.

So, what’s a therapist to do? Warn clients that if they make threats, they’ll be reported? That can have the effect of causing the client to leave therapy. Guess—and it really is a guess—whether a threat is real or perhaps a fantasy? Err on the side of caution? Give priority to the confidentiality requirement? Risk a malpractice lawsuit brought by the client if the therapist does report the potential threat? A wrongful death suit for not acting in time?

Which prevails: the duty to warn a potential victim or the duty to preserve confidentiality? And is it a duty to warn or a duty to protect? (These distinctions have been made in some places.) We’ve become used to the phrase “harm to self or others” when it comes to involuntary treatment. But this question goes further. What does a therapist owe to a specific individual who may be killed? Sectioning the client? Reporting the threat to the police? Directly warning the potential victim?

It’s an awful lot to place on the shoulders of a therapist: determine the reality of a threat, make a prediction about future violent behavior, and determine an appropriate response. Weighing patient confidentiality against harm or death to another is a huge burden. But in the interests of there never being another Tarasoff-style murder, I’m coming down on the side of the duty to warn.

Self-Care and Social Care

We hear a lot about self-care these days. Much of the mainstream media seems to think that it means “shopping therapy,” indulgent desserts, spa days, and mani-pedis. Expensive things. Ones that you need to be able to leave the house to do. (Except for online shopping, of course.)

Businesses are also quick to suggest self-care for their workers who are experiencing stress. What they mean by self-care is to take up yoga or meditation—on your own time and your dime.

Real self-care may include yoga and meditation and even the judicious use of ice cream, but it’s much more than that, of course. Self-care begins with the things that we all know are good for both body and mind—exercise, healthful food, good sleep, and stress reduction. Other good habits often mentioned are a digital detox, mindfulness, journaling, gratitude, affirmations, prayer, fresh air and sunshine, and hobbies.

Those are good things, of course, but they are primarily solo things, or at least were while the pandemic had us cooped up. Now we can get out and about more easily, go jogging or hiking with a friend, invite people over for dinner, and generally add human companionship to our list of self-care techniques.

But maybe what we need is social care (also known as community care). It’s hard to define social care. One source I looked at mentioned advocacy. But that’s pretty much something we have to do for ourselves. There are organizations like NAMI, and they do a great job at advocacy, but there’s only so much they can do. There aren’t brain illness support groups the way there are for alcoholism, narcotics addiction, and other kinds of afflictions that require outside support. There aren’t Meals on Wheels-type services for people who can’t leave their homes because of crippling anxiety. (Of course, grocery stores deliver now, but it gets expensive.)

There are very few group homes for people with SMI who need to transition between the hospital and living alone. There are group homes (sober houses) for those with alcohol or other addictions and even prisoners on parole. Many people with psychological or psychiatric needs rely on family members as long as they are able. I know a woman who lives with her father because of her assorted diagnoses. We’re all worried about what will happen to her when her father, who’s not in good health, dies. Her mother, when she was alive, tried to get her into a group home, with no success.

For those who are able to leave the house on occasion, social networks are recommended as a form of self-care. And even for people who can’t go out, there are social media, email, and video chats, which can fill some of the gaps.

But social services are thin on the ground, at least near me. I live in a suburb near two medium-sized cities. Most of the services available are for the mentally disabled, physically disabled, seniors, and some respite care for caregivers. People with SMI get a list of the crisis numbers. And, of course, psychiatric beds are limited and even psychiatrists have months-long waiting lists.

I know funds are limited and that the other groups need care, too. But those with SMI need social care as well. We’re dependent on tax dollars, which are hard to come by.

It’s worth noting that the National Health Service in the UK has many more programs accessible to those with SMI at little or no cost. Of course, those are functions of socialized medicine, which is not likely to be enacted in the US anytime soon.

Grippy Socks and Sour Candy

My husband is a great help when I write my blogs. He keeps an eye out for news stories that deal with mental health in some fashion. So when he saw an article on new words related to the topic, he made sure I saw it. Then he asked me how I felt about it.

The story was about new language that young people were using to describe various mental health concepts.

First and foremost among them was “grippy socks vacay”—a reference to the footwear issued to people who have been committed, voluntarily or otherwise, to psych wards. But “vacay” is short for “vacation.” I can just picture a conversation using it: “Where’s Janet been?” “Oh, she’s been on a grippy socks vacay.” Or “I’m stressed. It’s time I went on a grippy socks vacay.” It seems unlikely that the people who say these things are always referring to an actual stay in a psych ward.

I was more than slightly appalled. It’s true that grippy socks evoke the image of a hospital stay. But grippy socks are a part of any stay in any department of a hospital, not just psych wards. And such a stay is hardly a vacation. It’s likely, I think, that people use this to mean something like “relaxing getaway” or “time to clear my head.” An actual stay in a psych ward, however, is not a relaxing getaway. It’s intense. It’s not supposed to be relaxing. And while it does provide time to clear one’s head, that’s still far from accurate. Medication, group therapy, and individual therapy may eventually clear one’s head or at least change one’s perspective, but it’s hardly just a time away from work and day-to-day stresses.

The article went on to discuss whether the phrase increased or decreased stigma. Some said one, some the other. I think it perpetuates stigma. It implies that someone who is in a psych ward is there to have a good time. “Grippy socks vacay” is demeaning when the hard work that mental patients must accomplish is considered.

If it’s used as a euphemism for an actual psych ward stay, it’s insensitive at the least. If it just means time off from daily cares, it’s still inaccurate and discounts the real experience. Those things can’t be good for reducing stigma.

Now, my friends and I have been known to use irreverent language to refer to our conditions. Robbin and I used to say on occasion that we needed a “check-up from the neck up.” We used it just between the two of us (well, I’ve also used it with my husband) to indicate that we needed to see our therapists. But I don’t see it as demeaning, especially since we never used it in any context but our own disorders, never as a general description of someone the general populace would slangily describe as “crazy.” If we had said of any popular figure that they needed a check-up from the neck up, that would have been something else. But we didn’t.

Of course, you may disagree with this and I’d love to hear from you regarding your opinion.

The other article my husband shared with me was one that indicated that it was a trend on TikTok to use sour candy to ward off anxiety. The article even said that experts backed up the theory.

The idea is that the intense sensation of sourness distracts the brain from the cause of the anxiety. It’s a distraction technique, like snapping a rubber band on your wrist to take your mind away from unwanted thoughts. One expert interviewed for the article said, “Panic ensues when our amygdala triggers the flight or fight response. One way to dampen our amygdala’s response and mitigate panic is by turning our attention to the present moment through our senses: taste, smell, touch, sight, and hearing.” Mindfulness through candy, I guess, would be a way to describe it. The experts also advise grounding yourself with other sensations such as the scent of essential oils.

Other experts noted that sour candy is a kind of crutch and not a long-term solution. One called it “maladaptive.” Sensory distractions, they said, were most effective in conjunction with acceptance rather than avoidance.

What’s the takeaway from this? Aside from the potential boost in sales for Jolly Ranchers, I mean. I think it’s a good reminder that there are ways to short-circuit anxiety and panic. And for people who only experience occasional, momentary anxiety, it’s probably a good thing. But for someone with an actual anxiety or panic disorder, it’s likely to be only one tool they use — and a minor one, at that.

What have you been reading recently about mental health trends? I’d love for you to share that, too.

Codependency: Fact or Fiction?

Lately, I’ve been seeing articles with titles that say codependency is a myth or a hoax. They claim that the concept is not just wrong but harmful. Despite its almost 40-year history, codependency now seems invalid to many.

Codependency is defined as a mechanism whereby enablers are enmeshed with their child, spouse, sibling, or significant other to such an extent that they lose the ability to take care of their own emotional needs. The enabling also means that the person suffering from a psychological condition (originally addiction, but later other problems) does not have the motivation to work on themselves or change their behavior. In extreme cases, it means that one partner cannot tell where they end and the other begins.

My husband introduced me to the concept of codependency. He has a background in psychology and was greatly influenced by Melody Beattie’s writing. Her book, Codependent No More (published in 1986 but still selling well), his work with Adult Children of Alcoholics (ACA), and attendance at seminars on the topic have made him a staunch believer. When I told him about the articles, he scoffed. In fact, he seemed offended. It’s a basic tenet that aligns with his experience of psychology.

So, what are the objections to the concept of codependency?

First of all, it’s not a recognized psychological condition in that it’s not an official diagnosis. There are no specific diagnostic criteria, though there is a list of symptoms including fears of rejection or abandonment, avoiding conflict, making decisions for or trying to manage the loved one, keeping others happy to the detriment of self, and generally a “focus on caretaking and caring for others to the point that you begin to define yourself in relation to their needs.” Admittedly, those are largely squishy criteria (there are others), some of which overlap with officially recognized diagnoses.

Another definition states, “The codependent person sacrifices their needs to meet the demands and expectations of the other person. These individuals may also strongly desire to ‘fix’ the other person’s problems. The individual often neglects their self-care and personal growth in the process.” This was developed in the context of addiction studies, and some people object to the concept being broadened to include other circumstances.

More significant is the idea that the concept pathologizes love and support. Interdependence is the natural function of intimate relationships, and depending on each other is the ideal. Codependency theory is said to downplay helping behaviors that are essential to good relationships. In addition, codependency is often viewed as a “women’s problem,” and that reinforces patriarchal stereotypes, such as that women are “needy.” Instead, a person labeled codependent should work on overwriting old scripts of anxious attachment and other negative feedback loops.

Codependence is said to have contributed to the “tough love” movement that involved a hands-off approach to a loved one’s addiction, allowing them to experience the natural consequences of their behaviors. Tough love is discredited these days as a form of verbal abuse and a philosophy that has no basis in psychological practice, as well as reinforcing the idea that an addict must hit “rock bottom” before they are able to accept help. Tough love also promoted a model of intervention as a process involving anger, blame, non-compassionate confrontation, and the use of psychologically damaging “boot camps” for troubled teens.

Then, too, it is said that there is no research validating the concept of codependency, no way to measure it, and no effective treatment for it.

There’s another point of view, though—that codependency is a real, serious problem.

Let’s take that last point first. Research on codependency has revealed specific behaviors associated with it and the tendency to repeat those behaviors in subsequent relationships. Research has also indicated sex differences in codependency, with women being more likely to suffer from it. (It should be noted that women also dominate in diagnoses such as depression. Both genders are affected by depression and codependency, however.) And as with depression and other conditions, there are statistics to report how many people suffer from codependency, but none to say how severe their condition is. Also, codependency has its roots in attachment theory, family systems theory, and trauma studies.

Treatment for codependency is quite possible. Education, individual therapy, couples/family therapy, group therapy, CBT, and DBT have all had beneficial effects. Even 12-step programs such as Codependents Anonymous are possible ways to address codependency. And, like some other disorders, codependency responds to techniques such as boundary setting, building on strength and resilience, and self-care. It also has other characteristics common to other conditions—relapses and setbacks, for example.

As for the idea that codependency pathologizes love and support, it is true that these qualities are essential to the human experience and good things in and of themselves. But when those qualities get hijacked by excessive, misdirected, and exaggerated needs, they can become pathological. After all, moderate depression and anxiety are parts of the human experience too, but when they strike with extreme manifestations, they become pathological as well. To say that all expressions of love and support are good is to ignore the harm that they can do when they interfere with those normal experiences of human interaction.

And while the concept of codependence may have started in the field of addiction studies, there’s no indication that that’s the only place where it belongs. Plenty of psychological concepts begin in one area of study and expand into others. The idea of healing the inner child may have started with trauma studies, but it now applies to other areas as well, such as grief therapy and other abandonment issues (including codependency).

What does all this add up to? I think my husband and the proponents of codependency theory have a point. The fact that it hasn’t been sufficiently studied doesn’t mean that it doesn’t exist, just that it is a relatively recent idea compared to other conditions and pathologies. It has demonstrable effects on relationships and makes logical sense. If two people become enmeshed, their behaviors are likely to become warped and dysfunctional. In fact, dysfunction is one of the hallmarks of codependency. It explains relationship dysfunction in a way that few other concepts do. It may not be the only relationship hazard, but it checks a lot of the boxes.

Sure, the term codependency has been overused, especially in the type of pop psychology promoted by assorted self-help articles and books. But so have other psychological concepts and societal problems. Just because gender studies has had limited usefulness in analyzing male and female communication styles doesn’t mean that it has nothing to tell us.

So, do I think that the concept of codependency is a myth? No. Do I believe that it’s a “hoax,” as some have claimed? Again, no. Is the concept itself toxic? Does it imply that love and support are invalid? No. Is it overused by people who don’t understand it? Certainly. Does codependency deserve more study and practice before we discard it? Definitely.

I’ve seen codependency working in people’s lives. Anecdotal evidence isn’t sufficient to prove its reality, of course, but it’s a starting point for further exploration by professionals. Just because something doesn’t appear in the DSM, a notoriously changeable document, doesn’t mean it’s not real.

Mental Illness: Poverty and Privilege

Mental illness is not just an American problem. In fact, it’s a problem around the world, and perhaps much more acute in other nations, especially those plagued by poverty.

There’s no way to know for sure, but many – perhaps most – of the world’s mentally ill are undiagnosed, untreated, ignored. Because what do you do when you live where there’s no psychiatrist? No therapist. No medication. No help.

Your family may support you, shelter you, or shun you, depending on their financial and emotional resources and those of the community. But for many people, there is simply nothing.

Psychiatrist Vikram Patel, one of Time magazine’s 100 Most Influential People for 2015, is working to change that.

As a recent profile in Discover magazine put it, Patel and others like him have set out to prove “that mental illnesses, like bipolar disease, schizophrenia, and depression are medical issues, not character weaknesses. They take a major toll on the world’s health, and addressing them is a necessity, not a luxury.”

In 2003, Patel wrote a handbook, Where There Is No Psychiatrist: A Mental Health Care Manual, to be used by health workers and volunteers in poverty-stricken communities in Africa and Asia. A new edition, co-written with Charlotte Hanlon, is due out at the end of this month.

Patel, in his first job out of med school, in Harare, Zimbabwe, says he learned that there wasn’t even a word for “depression” in the local language, though it afflicted 25% of people at a local primary care clinic. There was little study of diagnosis and treatment in “underserved areas.”

Later, epidemiologists learned to their surprise that mental illnesses were among the top ten causes of disability around the world – more than heart disease, cancer, malaria, and lung disease. Even so, their report was not enough to spur investment in worldwide mental health.

Patel developed the model of lay counselors – local people who know the local culture – guiding people with depression, schizophrenia, and other illnesses through interventions including talk therapy and group counseling. By 2016, the World Health Organization (WHO) admitted that every dollar invested in psychological treatment in developing countries paid off fourfold in productivity because of the number of people able to return to work.

One objection voiced about Patel’s model is that the real problem is poverty, not depression or other mental illness. The argument goes that the misery of being poor, not a psychiatric illness, leads to symptoms and that Westerners are exporting their notions of mental health to the rest of the world, backed up by Big Pharma. Patel responds, “Telling people that they’re not depressed, they’re just poor, is saying you can only be depressed if you’re rich … I certainly think there’s been a transformation in the awareness of mental illnesses as genuine causes of human suffering for rich and poor alike.”

Of course the problem of underserved mentally ill people is not exclusive to impoverished nations. There are pockets in American society where the mentally ill live in the midst of privilege, but with the resources of the Third World – the homeless mentally ill, institutionalized elders, the incarcerated, the misdiagnosed, those in rural areas far from mental health resources, the underaged, the people whose families don’t understand, or don’t care, or can’t help, or won’t.

I don’t know whether Patel’s model of community self-help can work for those populations as well as it does internationally. This is not the self-help of the 1970s and 80s, when shelves in bookstores overflowed with volumes promising to cure anything from depression to toxic relationships. It would be shameful if the rich received one standard of care for mental health problems, while the poor had to make do with DIY solutions, or none.

But, really, isn’t that what we’ve got now?

Pill-Shaming

When I first started taking Prozac, when it was just becoming ubiquitous, my mother said, “I hear it’s a ticking time bomb!”

“Oh, dear!” I thought. “Mom’s been listening to Phil Donahue again.” (She had been, but that’s not the point.)

Back in the day, Prozac was hailed as a miracle drug and condemned as a killer drug. On the one hand, it was said to be a “magic bullet” for depression. On the other, it was supposed to result in addiction and suicide.

It’s probably true that it was prescribed too often to too many people who may not really have needed it. And it may have led to suicides—not because Prozac prompted such an action, but either because it was improperly prescribed or because it activated people who were already passively suicidal and pushed them into action.

At any rate, Prozac was not an unmixed blessing.

For me, it was closer to a miracle drug. It was the first medication that had any significant effect on my depression. I noticed no side effects.

But Prozac is no longer the psychiatric drug of choice. Since that time, hundreds—maybe thousands—of psychotropic drugs have been introduced and widely prescribed. Many have proved just as controversial as Prozac. Indeed, the whole concept of psychiatric drugs is now controversial.

I belong to a lot of Facebook groups that encourage discussion on psychological matters and have a lot of Facebook friends with opinions on them, sometimes very strong ones. Some of the people with the strongest opinions are those who condemn certain classes of psychiatric drugs or that category of drugs altogether. They share horror stories of addiction, atrocious side effects, zombie-like behavior, and even death from the use of these drugs.

Benzos are the drugs that are most often condemned. And it’s true that they can be addictive if they’re misused. Whether that’s because a doctor overprescribes them or a patient takes more than prescribed I couldn’t say. But I maintain that benzos aren’t inherently harmful when prescribed appropriately and supervised professionally.

I have personal experience with benzos. They were the first psychiatric drug I ever took, meant to relieve a rather severe nervous tic that affected my neck and head during junior high school. I do remember walking off a short stepstool while shelving books in the library, but I was not injured and the misstep could be attributable to ordinary clumsiness, which was something I was known for (and still am). The benzos were discontinued when I got better. I also took benzos in college because of pain due to temporomandibular joint problems.

Now I have benzos that my psychiatrist prescribed “as needed” for anxiety and sleep disturbances. After all the years I’ve seen him and my history of compliance with prescribed medication, plus the very low doses, he had no hesitation prescribing, and I have no objection to taking them.

But some of the people I see online object to any psychiatric drugs whatsoever. Again, the most common complaints are addiction, side effects, and zombie-like behavior. Of course, I can’t—won’t—deny that they have suffered these effects. Psychotropics are known to affect different people differently. I’ve had side effects from many of the ones I’ve taken that were too unpleasant for me to continue taking the drugs. But after all the different meds I’ve tried during my journey to a combination of drugs that work for me, it would be a surprise if I objected to them altogether.

But I don’t. I’ve had cautious, responsible psychiatrists who’ve prescribed cautiously and monitored rigorously, listening to me when I reported side effects.

So, my personal experiences have been good. I know not everyone’s experiences have been, for a variety of reasons.

What I object to is the drumbeat of “all psychotropic medications are bad and ruin lives.” And the memes that show pictures of forests and puppies that say “These are antidepressants” and pictures of pills with the caption “These are shit.”

I hope those messages don’t steer people who need them away from psychotropic medications. And I hope that people who do need them find prescribers who are conscientious, cautious, and responsible in prescribing them. On balance, I think they’re a good thing.

Stigma, Prejudice, and Discrimination

Those of us with brain illnesses such as bipolar disorder, OCD, PTSD, and schizophrenia often speak of the stigma associated with our problems. It’s no wonder—stigma affects our lives in both predictable and unpredictable ways.

For instance, say you’ve become comfortable talking about your disorder. Then one day when you’re at a reunion or some other gathering, you happen to mention it and get the glazed-eyes-fixed-smile-back-away-slowly response. Sure, a lot of people don’t know what to say to you, but that reaction just makes it clear that you are different and, to that person, potentially a source of danger. Someone to be avoided. Someone not to engage with.

That’s stigma.

Prejudice is related to stigma. It’s just a short step away. Prejudice happens when people have a preconceived idea of what brain illness looks like. (That’s what prejudice means.) This could be a person who assumes that a serial killer or mass shooter is obviously “insane.” Their assumptions are reinforced when it’s revealed that the perpetrator had a history of psychological problems or had taken medication. They’re ignorant of the facts—that most killings are prompted by motivations such as rage, gender or racial hatred, jealousy, or fear. They don’t know that the mentally ill are much more likely to be the victims of violence than to be perpetrators.

People with prejudice against people with mental illness can also assume that psychiatric diagnoses are not “real.” They think people with these conditions can—and should—just “snap out of it,” “pull themselves up by their bootstraps,” or “get over” their problems. They look down on people who seek help. They make jokes about “crazies” and “lunatics.” They believe that anyone with a “real” mental illness is in a locked ward in an “insane asylum,” or should be. They don’t know that straitjackets aren’t used anymore and think they make funny Halloween costumes.

In other words, people who are prejudiced lack understanding and empathy.

Discrimination takes it one step further.

When people with brain illnesses suffer from discrimination, they lose opportunities because of their condition. If they are open about their diagnosis on applications, they may never receive a callback or an interview for a job. They may start receiving bad evaluations at work if they have to leave for doctor’s appointments or be let go for not getting along with other workers, many of whom may have prejudice against them. They don’t receive the accommodations required by the Americans with Disabilities Act (ADA).

Discrimination can also be involved with decisions from Social Security Disability. It’s not supposed to be that way, but people with mental illnesses are likely to have more difficulty “proving” that they have a disability severe enough to warrant supplemental income.

So what’s to be done? Education is the solution we always advocate. But it’s a hard ask. It’s difficult to get anyone to learn about the realities of brain illnesses. People don’t learn about them in school, and the messages they get from the media do little except reinforce and perpetuate the stigma surrounding the various conditions.

Pushback is another strategy. We simply cannot let it pass when someone makes a prejudiced remark or demonstrates a lack of understanding. We can speak up about inappropriate Halloween costumes or assumptions about violence and the mentally ill. We can inform others that not all homeless persons are mentally ill. In fact, most homelessness is caused by a lack of affordable housing and low wages.

When it comes to discrimination, legislation and activism are often the solutions or at least the beginnings of them. Lobbying efforts regarding policy and treatment will help. Lots more needs to be done to inform legislators about the very basics, much less the possible ways to address the problems. Reporting violations of the ADA may not lead to resolutions, but it still needs to be done.

Of course, it’s difficult for many people with brain illnesses to do these things. We are frequently isolated and doubt our own abilities. Confronting legislators, educating them, and lobbying for their attention is daunting. Neurotypical people have trouble doing it, especially without an organization that gives them leverage. But it’s work that needs to be done. I admit that I’m not at the forefront, though these blog posts and my books are intended to help educate, and the groups I belong to try to do likewise.

It’s not enough. But it’s a start.

Positivity and Acceptance

Those who follow this blog have seen me rail against toxic positivity. When it’s not absurd, it’s insulting to those of us with mood disorders. No, we can’t just cheer up. If we could look at the bright side, we wouldn’t have depression or anxiety. You may be able to choose happiness, but I can’t. I’ve needed medication and therapy just to feel meh at times. If I could turn bipolar disorder off like a light switch, don’t you think I’d do it?

Toxic positivity can be seen nearly everywhere, in a lot of different situations: the self-help movement, of course, but also business, medicine, and even religion – as well as endless memes. American society is rife with toxic positivity. It appears in motivational business conventions and TED Talks. Salespeople are advised to think positively and envision success. Breast cancer survivors are advised to keep a positive attitude, to the extent that they are encouraged to tell how the disease has had a positive effect on their lives and relationships. (Expressions of fear, anger, and other natural emotions in response to the diagnosis are downplayed or discouraged.) Religions can exhort us to count our blessings or “manifest” our wants and needs by using positive thoughts to attract them.

Positivity becomes toxic when it is seen as the only method of coping with problems in life, even ones that have other solutions or none. Toxic positivity presents relentless cheer as the only acceptable reaction and a panacea for every difficulty. And toxic positivity leads people to demand that others take up the mindset and apply it to every situation, even devastating ones. As such, it denies the reality of human suffering and normal emotional responses. It’s a form of non-acceptance.

So, what is the alternative? What is a more natural – but still effective – technique for dealing with difficulties? How can those of us who have mood disorders or any other brain illness find ways to navigate through life without slapping on a smile and coercing our emotions to fit a certain mold?

Radical acceptance is one answer. Radical acceptance means that you accept your inner feelings and your outward circumstances as they are, especially if they are not under your control. You acknowledge reality without trying to impose a set of emotional mandates on it. Your acceptance and acknowledgment may involve pain or discomfort, but pain and discomfort are natural, understandable human responses.

Rooted in Buddhist teachings and given a name by Marsha Linehan, the psychologist who developed dialectical behavior therapy (DBT), radical acceptance uses mindfulness to help people learn to face and regulate their emotions. Interestingly, one 2018 study found that accepting your negative emotions without judgment is a factor in psychological health.

With radical acceptance, when you encounter difficult situations and emotions, you note their presence without trying to suppress them. You accept them, as the name implies. This attitude can address – and reduce – feelings of shame and distress that you may feel, especially when you are not able to simply shut off those feelings and replace them with positivity. That doesn’t mean that you wallow in unpleasant feelings or allow unfortunate circumstances to stunt your responses.

Instead, you note the feelings – accept that they exist – and “hold space” for them within you. You appreciate that your emotions can lead you to new understandings of and reactions to your circumstances. For example, instead of adhering to the unattainable maxim that “Failure is not an option,” you can recognize when you have indeed failed and accept it as a natural part of life. You can then move on to a mindset of growth where you use that failure to inform your future actions. You develop a more accurate picture of the world and can begin implementing real solutions.

Of course, there are situations where radical acceptance is not appropriate. Abusive situations, for one, shouldn’t simply be accepted without being addressed. But neither will positive thinking resolve them. They require action, from seeking help from a trusted individual to leaving the situation to contacting law enforcement or an organization that can help.

But in other circumstances, radical acceptance may be an answer for some. For myself, I’ll just be satisfied if radical acceptance helps drive out toxic positivity. I don’t think it will, but a person can dream.

The Fire and the Window


When Anthony Bourdain died by suicide and I told someone the news, he asked me, “Why?”

I was taken aback. “What do you mean, ‘why’?” I replied.

“You know,” he said. “Did he have money trouble? Break up with his girlfriend? Have some disease?”

That’s a common reaction to suicide and it’s uninformed. Real-life stressors can contribute to suicide, but they are almost never the whole story. People die by suicide when the pain of living seems greater than the pain of dying.

Gregory House, the misanthropic, genius title character of House, M.D., once said, “Living in misery sucks marginally less than dying in it.” People who kill themselves don’t believe that. They believe the opposite.

The best metaphor I ever heard for suicide was the plight of people in the World Trade Center’s upper floors on 9/11. There were the flames. There was the window. And that was the choice. Suicide happens when a person sees only two alternatives and both are equally horrible, or nearly so.

The bullied child does not take her own life because she was bullied. She was in pain, for a variety of reasons that included bullying. It was a factor, but it wasn’t the reason. She was hurt. She was isolated. She was depressed. She didn’t believe that things would improve. She wanted the pain to stop. She believed she faced the choice between the fire and the window.

The politician who dies by suicide in the face of a major scandal does not kill himself because of the potential scandal. He dies because he sees his choices limited to shame, humiliation, despair, and ridicule. He believes that what happens to him will be as bad as dying. He is caught between what he sees as the fire and the window.

Mental illness can make it difficult to see that there are other choices. The distortions of thinking associated with serious mental illness can make us see only the fire and the window.

The one time that suicidal ideation got the better of me and I was close to making the choice, my thinking was just that twisted. I was faced with a choice that seemed to me would ruin someone I loved. I thought that I could not live with either choice – to ignore the behavior or to turn him in. One was the fire and the other, the window.

My thinking, of course, was severely distorted by my mental disorder. The thing that I thought might rain destruction on the other person was much smaller than I believed. There were ways out of the dilemma other than dropping a dime or killing myself. If we continue the metaphor, the fire was not that big, or that implacable, or that inevitable, but I couldn’t see that. In the end, I hung on long enough for my thinking to clear and for me to see other options.

I don’t actually know what was going on in the minds of the souls who were trapped in the Twin Towers. I don’t mean to lessen the horror of their deaths or wound their families by speaking of suicide this way. The reality of their choice is so far distant from the choices that other people who consider suicide face.

But that’s kind of the point. People who die by suicide don’t see any other way out. When the deceased seem to have been responding to what most people consider survivable hurts or solvable problems, people say they can’t understand how someone that rich, that successful, that beloved, that full of potential could have failed to see that help was only a reach away.

The person who dies by suicide doesn’t see the hand reaching out. Only the fire and the window.

If you are considering suicide, call the Suicide and Crisis Lifeline: 988.