Thursday, November 8, 2012

Primum non nocere


“First, do no harm.”

I’ve been reflecting on this phrase more often lately. It’s thrown around quite a bit when you read, write, and talk about medicine. Do we mean we should never try things that might hurt someone? Do we avoid certain types of risk? Who decides what the risk/benefit ratio is? Do these principles apply to our personal lives? Do we avoid certain confrontations? Do we shy away from the hard questions?

People bring up this idea of doing “no harm” in all different contexts; they use it to mean slightly different things and they often misquote it as part of the Hippocratic Oath. 




While we’re on it, the Hippocratic Oath contains a few of my new favorite phrases, roughly translating as follows:

“…I will come for the benefit of the sick, remaining free of all intentional injustice, of all mischief…”

“…what I may see or hear in the course of the treatment or even outside of the treatment in regard to the life of men, which on no account one must spread abroad, I will keep to myself holding such things shameful to be spoken about...”

“…if I fulfill this path and do not violate it, may it be granted to me to enjoy life and art...” 

That is some seriously good stuff about pure intentions, confidentiality, and the art of medicine. And, much as I love reading and talking about it, I occasionally end up with overwhelming guilt if I think too long about whether my motives really are always pure and about exactly how to combine the science and art of medicine. 

So this, my friends, is as good a segue as any into a brief discussion about the interplay among science, religion, and homosexuality. Or not. Hear me out, though; I think it’s all intimately related to the idea of not doing harm and – we hope – actually doing some good. 


Learning about why religion and homosexuality don’t have to be mutually exclusive could be fun even if you don’t identify as religious or homosexual

There are a variety of reasons to learn about things. Well, maybe what I want to say is that you don’t need a reason to keep learning and reading broadly, even if you are supposed to be studying a very specific field. Could I be learning the names of arteries and veins and nerves right now? Yeah, sure, but whether you are a doctor or a lawyer or a professor or a seamstress or an equestrian, it’s fun to try to keep up some level of diversity of interests. (Oy vey, I know, I should stop already with the tangential discussing-bordering-on-pontificating.) Anyhow, after some recent talks with a Lutheran pastor, an Orthodox Jewish rabbi, a Jewish Hillel leader, and a Presbyterian minister, I realized how disastrously behind I was on some extraordinarily important (and interesting!) reforms taking place. On a more basic level, there were some religious texts I needed to re-read for myself and some wandering thoughts that I needed to let wander. 


Diving right back into the Bible

For years, I’ve been a big fan of the concept of “biblical interpretation” and not the technique of “literal reading of every single phrase.” Biblical stories are some of my favorites in the world and I’ve learned countless invaluable lessons from my Bible. Yet, I have never been tempted to literally interpret passages describing when a woman should be stoned or when the enemies should be slain or when a hand should be cut off. The idea that I subscribe to is that scripture is inspired but not infallible or beyond question. Interestingly, when you look through the ENTIRE Bible, there are only about five to seven references (depending on how you count) to issues of homosexuality. Contrast that number with the approximately 2,000 references to providing shelter for the homeless, clothes for the naked, and love for the unwanted. Further, some of those references to homosexuality are not, perhaps, even supposed to get across laws about homosexual behavior. 




It is critical to look at the original Greek text if you want to try to extrapolate meaning for yourself; take, for instance, the Greek malakos, which literally means “soft” but is used to describe a young or effeminate man. Other Greek words for “effeminate” have meanings that translate to “man-woman” and “man who enjoys penetration by another man.” In certain biblical passages, the position of the word suggests the “soft and young” definition, in which case we are talking more about pederasty than homosexuality. Some verses often cited by homophobic leaders are stories of, say, an elderly man paying a young boy to provide him with sexual pleasure. Molestation or purchase of children of any gender is a deeply disturbing problem, to be sure, but it has no place in an argument against homosexual relationships that are based on love and mutual respect.

I’m also intrigued by the idea that Paul, who wrote a solid chunk of the New Testament, might have had no concept of sexual orientation. What if he just thought everyone was straight? Maybe he thought he was condemning people for acting in a way that was unnatural to them as individuals. If you keep on with that train of thought, you wind up with the idea that we should now condemn homosexual men who are married to women and having sex with them. I won’t continue with that tangent, but it’s food for thought.



Christian reformation, here and now

Some official changes have been going on throughout the last decade. To take a specific example, the Evangelical Lutheran Church in America made two major amendments during its national convention in 2009. It’s now possible in the Lutheran Church for LGBT members – even those who are in committed relationships – to be ordained as pastors. Additionally, if the individual congregation supports it, pastors can bless same-sex marriages.

One of the most touching stories I heard recently, though, had nothing to do with an official change of policy; it was an individual man’s story of how he and his wife came to terms with their son’s coming out. An ordained minister, this man was of a conservative upbringing and openly admitted that he and his wife were distraught when their son first tried to talk to them about his homosexuality. Of course, I was also delighted to hear that it was their family physician who encouraged them to continue being open about how they were dealing with the news, and it was also the physician who so vehemently encouraged them not to let it break up the family. Currently, their whole family is part of the national Presbyterian Church that, as of this year, will officially ordain gay pastors. During his visit to the University of Michigan, he emphasized to us that the overriding message of Christianity is about life, healing, compassion, and inclusiveness. I couldn’t agree more. The narrative of resurrection in the New Testament, read through this lens, is one of alleviating suffering and injustice. I would argue that this goes far beyond that first creed of doctors to simply “do no harm.” We need to make an active effort to do good. Do something little, every day. Personally, I need to learn more about many issues before I can even start to figure out how to avoid hurting others (or, better yet, help them).


Some thoughts about Judaism

Many queer Jews will tell you it’s still pretty hard, but the level of acceptance for LGBT members varies greatly among the different sects of Judaism. Orthodox Judaism remains staunchly anti-gay. Some rabbis do acknowledge that there are people with homosexual tendencies…and they recommend “reparative therapy,” or – for men – marriage to women to practice and encourage healthy sexual relations with the opposite sex. There is an increasing sentiment among the Orthodox that since God loves all people, they should not be blamed, per se, for experiencing “homosexual feelings,” and that we need to balance our compassion and our judgment. I’m worried that this stance might in some ways be more harmful, since it tends to situate homosexuality as a “feeling” or “illness” to be taken care of. A friend of mine pointed out that California recently became the first state to ban conversion therapy for minors, terming it not only flat-out quackery but also blatantly harmful. After signing the bill, Governor Jerry Brown announced that he supported it because it bans “non-scientific ‘therapies’ that have driven young people to depression and suicide.”

Reform Jews are more liberal than either the Orthodox or Conservative sects; they tend to cite the parts of Genesis referring to when God created all people, male and female, “in His image” as evidence of His love for any and all genders and sexualities. Since 1965, they have advocated for LGBT rights. (Note that this was nearly a decade before the American Psychiatric Association declassified homosexuality in 1973, officially removing it from the list of mental disorders.) Within Reform communities, gay and lesbian marriages are viewed as holding the same weight as heterosexual, traditional marriages. A local synagogue in Ann Arbor, Michigan, for instance, performs same-sex marriage ceremonies despite the fact that Michigan law does not recognize these unions. Interesting, I think, when religious groups can be MORE liberal-minded than the dominant politique.






And we’re back to the big idea: do some good

There is an unfortunate and dramatic difference between changing doctrine and changing people’s views (perhaps in some respects analogous to the difference between passing a law and seeing it enforced). People will continue to conduct themselves as they see fit, so even in religious communities that officially allow gay people to be ordained, for instance, there is still a fair bit of discrimination. You can’t legislate morality and attitudes, and a lot of us are guilty of operating under the dominant, heteronormative, patriarchal assumptions. It takes a huge amount of effort to fight these forces, especially given the insight it takes to even recognize our own biases. It’s rough. I literally squirm with guilt and embarrassment each time I uncover a prejudice I’ve been harboring. Do I have gay and lesbian friends? Of course. Do I understand all that they are up against? Not at all. For me, at least, I hope to move far beyond “do no harm” and contemplate how to actually “do good.”

Tuesday, October 23, 2012

Get over it: depression is real


“Get over it!”

I am not talking to the people who are clinically depressed and saying “snap out of it.” I am talking to the people who continue to question the legitimacy of major depressive disorders and I’m saying “get over yourself.” I can tell you right now that when you have close friends or relatives go through this, you will no longer try to explain away concerns by telling people that everyone has bad days, that they are simply tired and cranky, and that once they get fresh air they’ll realize life is just dandy.


Depression is a chronic illness and we need to treat patients – and their friends and families – with the respect and care they deserve. It’s like diabetes or high blood pressure: it’s real and it’s treatable. So, the next time someone asks you if depression is a real illness or someone being wimpy about their sadness, please set them straight for me. Further, if you recognize the symptoms we’ll talk about, dare yourself to start a conversation with the person you’re concerned about; as far as treatment goes, the sooner the better. (I’ve already talked a bit about how the Affordable Care Act will impact coverage of psychiatric illness. In the future, I promise you posts about alcoholism, anxiety, schizophrenia, and obsessive compulsive disorder. And, yes, you guessed it: these will all come to you from the perspective of someone who has friends and family who have suffered.) 


Back to the basics

A recent seminar at the medical school at Michigan left many students with questions (some as blatantly inappropriate as “when did we decide depression was an illness we should prescribe drugs for?”), so I’ll go back to cover a few basics. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) defines major depressive disorder as a syndrome, where the diagnosis requires a patient to have at least five of the nine symptoms, consistently, for at least two weeks. These symptoms are what you might presume: depressed mood most of the day, diminished interest in almost all activities, significant weight loss or gain, sleep changes, physical agitation or slowing, intense fatigue, excessive guilt and feelings of worthlessness, diminished concentration and indecisiveness, and suicidal thoughts. For this post, I’m focusing on major, unipolar depression, but some of what we’ll talk about is also applicable to bipolar disorder, particularly given that people with bipolar disorder tend to experience “lows” that are longer and more severe than the “highs.”
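For the programmers among you, the five-of-nine, two-week rule above is simple enough to sketch in a few lines of code. This is a toy illustration of the counting rule only, not a clinical tool; the symptom labels and the function name are my own shorthand, and the full DSM-IV criteria involve more nuance (for instance, at least one of the five symptoms must be depressed mood or loss of interest).

```python
# Toy sketch of the DSM-IV counting rule described above:
# at least five of the nine symptoms, lasting at least two weeks.
# Labels and function name are illustrative inventions, not clinical terms of art.

NINE_SYMPTOMS = {
    "depressed mood",
    "diminished interest",
    "weight change",
    "sleep change",
    "agitation or slowing",
    "fatigue",
    "guilt or worthlessness",
    "poor concentration",
    "suicidal thoughts",
}

def meets_counting_rule(reported_symptoms, duration_weeks):
    """Return True if the reported symptoms satisfy the
    five-of-nine, two-week rule of thumb from the text."""
    recognized = set(reported_symptoms) & NINE_SYMPTOMS
    return len(recognized) >= 5 and duration_weeks >= 2
```

Again, this is only the arithmetic of the rule; actual diagnosis is a clinician’s job.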


Over the course of a lifetime, 16% of people get depressed. That’s a lot. Of those people, only half get treated, despite what we now know about the host of associated problems, ranging from loss of productivity to increased cancer mortality. A few other interesting numbers: nearly 15% of women experience post-partum depression, about half of patients who have experienced a heart attack also develop depression, and more than a quarter of adult Americans suffer from a diagnosable mental disorder in a given year. Those numbers are so high that it’s essentially impossible that you don’t know someone who has dealt with mental illness.

Disease progression in depression is an interesting phenomenon, and it has much to do with brain chemistry, some of which remains incompletely understood. What we do know for sure, though, is that after one episode, there is a 50% chance of another depressive period; after a second episode, there is an 80% chance of a third. For the first depressive period, a relatively great stressor is required, but the threshold for stress lowers during subsequent episodes until eventually the trigger is endogenous. Suicidal thoughts and behaviors, arguably the most alarming symptom of depression, are actually more related to impulsiveness than to severity of depression. In fact, hospitalization is basically recommended just to give patients the time to change their minds. In studies of people with suicidal intentions who had jumped off the Golden Gate Bridge and survived, the vast majority reported changing their minds on the way down. Apparently for some people the impulsiveness passes rather quickly. And, no, I’m obviously not at all proud that our quintessentially Californian bridge is the world’s leading suicide site.



So what’s one to do? Think about treatment. And then start right away.

Behavioral changes are the first line of attack. Most doctors, psychologists, psychiatrists, and therapists will recommend changes in exercise, sleep hygiene, and the like. In conjunction with these lifestyle changes, cognitive behavioral therapy (CBT) can have enormously positive influences. What exactly is CBT? It’s the best thing since sliced bread (which actually sometimes gets stale, anyhow). CBT addresses specific maladaptive behaviors and thought processes, providing patients with explicit goals and systematic tools. It’s incredibly effective for a whole host of disorders, including substance abuse, eating disorders, and major depression. This – I promise you – is solid evidence-based practice. For many psychiatric illnesses, current environmental stressors are a much better predictor of disease progression than are early life factors. That is to say, recent life stress and chronic stress matter more than a dysfunctional childhood. Psychodynamic and Freudian styles of talk therapy are decidedly out of vogue…since they are ineffective and were never based on hard science. CBT, in contrast, won’t necessarily uncover all of your secret histories and deepest conflicts, but will give you concrete tools to take with you out the door and apply to your life, right that day. It helps you identify your own, specific, individual negative beliefs and behaviors and learn to replace them with positive, healthy ones. The principle of the therapy in some ways stems from the age-old advice that even if you can’t change a situation you can change the way you think and respond.

For many people, behavioral changes are not enough, though, as the brain chemistry is already too out of whack to rebalance itself without some help. There is nothing wrong with needing medication and taking it says absolutely nothing about a person’s intellect, personality or strength. If you are even remotely tempted to think otherwise, please feel free to join the rest of us in this century, along with common sense, decency, and science.

In case I get a reader who doesn’t speak Science (and since I’m not completely fluent in that language myself), we’ll stick with fairly basic information on drugs. The vast majority of doctors start off by prescribing a selective serotonin reuptake inhibitor (SSRI, such as Prozac), which increases serotonin levels in the synapses between neurons. To make a long story short, serotonin is a neurotransmitter that relays messages between brain cells that regulate mood, appetite, sleep, learning, and sexual desire. Interestingly, the biochemical change happens in hours yet the antidepressant effect usually takes weeks. We don’t completely understand the mechanistic pathways involved and we haven’t ever measured levels of serotonin in a living human brain. We’ve measured blood levels of neurotransmitters such as serotonin (and, yes, levels are lower in depressed people), but we actually don’t know how well blood levels correspond to brain levels. Additionally, there is dispute among researchers about exactly why SSRIs are effective, with some scientists claiming that it has less to do with the increased serotonin-mediated messages and more to do with a regeneration of brain cells that is induced by serotonin.


Other commonly used drugs include serotonin and norepinephrine reuptake inhibitors (SNRIs, such as Cymbalta) and norepinephrine and dopamine reuptake inhibitors (NDRIs, such as Wellbutrin). The pertinent biochemistry here is that norepinephrine is a stress hormone affecting attention and memory and dopamine is a neurotransmitter implicated in the reward-driven learning system. The rest of the antidepressants out there fall into one of three categories: tricyclic antidepressants (yup, the chemical structure has three rings; they’re not used much anymore), atypical antidepressants (they don’t fit neatly in another category; they’re often taken in the evening to help with sleep), and monoamine oxidase inhibitors (MAOIs, usually prescribed as a last resort since there can be deadly interactions with certain foods). A little side note: MAOIs were discovered back in the 1950s, during trials for new drugs to treat tuberculosis. It turned out that the medication had psychostimulant effects on the tuberculosis patients. The history of medicine is crazy! (Pun intended. Pun always intended, tasteful or otherwise.)

For many, finding the right medication takes a while. It’s a process of trial and error that frustrates patients for many reasons: some medications take up to two full months to have effect, the side effects are often worse at the beginning, and there can be withdrawal symptoms if you stop too abruptly. There are a few recent developments that can help patients find the right medication, including a blood test to check for specific genes we think affect how your body uses antidepressants. The cytochrome P450 (CYP450) genotyping test is one such predictor of how an individual might metabolize a medication.

Last resorts include ketamine and electroconvulsive therapy; we usually save these for patients whose depression has resisted other forms of treatment. Ketamine causes brief psychotic episodes, working much like phencyclidine (PCP, aka “angel dust”) and other hallucinogens used recreationally. After a few hours, ketamine then works as an antidepressant through its action as an antagonist at glutamate (NMDA) receptors. In striking contrast to other drug options, electroconvulsive therapy (ECT) is a treatment that actually involves passing electrical currents through the brain, theoretically to affect levels of neurotransmitters and growth factors. Despite the suspicion surrounding ECT and its side effects, it typically gives immediate relief even when other treatments have failed. Of all depressed people treated, only 1 to 2% receive ECT.



Sadly, there’s still so much we don’t know

Depression is undoubtedly a real illness, but it’s “misunderstood” not only in the social sense of the term but also within the realm of science. Given that it does have a biological basis, perhaps it’s worth considering how evolution allows this to continue happening. Why would this ever confer an advantage? Why are there still people in the population who are predisposed? How on earth does this play into survival of the fittest? I have a few speculative thoughts, based – as per usual – on personal observation and somewhat anecdotal evidence.

Rates of depression are much higher in first-world countries, which might suggest something about how our lifestyles have changed faster than evolution can keep up. We are experiencing stress and anxiety at a whole new level, what with this new-fangled concept of “long-term goals” and “delayed gratification.” I’m certainly in no position to pooh-pooh lengthy endeavors, given that I’m committed to another decade of education and training, but I do sometimes wonder if perhaps people in third-world countries – living closer to that hunter-gatherer lifestyle – are better adapted to their types of daily stress. There is evidence related to other mental illnesses, too; for instance, schizophrenic people have a much better prognosis in rural India, likely due to the familial and social structures in place and the markedly decreased stigmatization compared to industrialized areas.


The strong correlation between depression and anxiety is also worth touching on. At first glance, it may seem counterintuitive that these two are related, since they appear confusingly contradictory, but it turns out that many people experience both and that one predisposes you to the other. Without going into too many details, suffice it to say that once you look at the neurotransmitters involved in states of arousal (the nervous energy kind, but also the sexual kind) and depression, it actually makes a lot of sense. There have been some particularly interesting recent studies with medical interns and residents (generally stressed, anxious people) about the differences in rates of depression between men and women. While about 20% of men developed depression, a full 40% of women did. Further, having a child during internship increased the likelihood of depression for women but had no effect on risk for men. More broadly, people who attended medical school outside of the United States were significantly less depressed. I’m not even going to try to go into that – way too many confounders and way too much opportunity to say things that are far from politically correct – but you can use your imagination as to why that might be.


There are some promising directions for future depression treatment research and it’s these questions we should be addressing, not the silly ones about whether it’s a major problem or not. Given what we see in patients with both depression and immune dysregulation, is it likely that cytokines play a role in disease onset? Would anti-inflammatory drugs alter depressive symptoms?  Since we see so much depression after strokes and heart attacks, should we investigate other cardiovascular factors? As about 40% of variance in depression can be attributed to genetics, should we devote more effort to searching for specific genetic polymorphisms? How useful is functional magnetic resonance imaging (fMRI) in studying the changes in activity and connectivity of depressed patients? Is it relevant that depressed people have smaller hippocampi and that their amygdalae are more strongly connected to their prefrontal cortices?

Scientist or not, I hope you learned something today…or, at the very least, revisited and reflected on important issues. If you take away anything at all, let it be this: depression is real, chronic, and treatable. Some of the most incredible people I know have dealt with it and mental illness should never, EVER be stigmatized. Let’s get on with it, progress with research, and talk about it openly.


Monday, October 15, 2012

Which came first: leech the doctor or leech the worm?


Comparative philology is a great combination of linguistics and history, so let’s do a little bit. We might wonder why leech used to be a word for the doctor and at the same time for the worm used by the doctor. Obviously, this raises the question of which came first: leech the doctor or leech the worm?
 

Dr. Lewis Thomas knew how to go about answering questions the right way: “The evolution of language can be compared to the biological evolution of species, depending on how far you are willing to stretch analogies. The first and deepest question is open and unanswerable in both cases: how did life start up at its very beginning? What was the very first human speech like?”

He was really THE man, Thomas was.

Anyhow, let’s get back to language and species. Since fossils exist for both, we can trace back to near the beginning. Prokaryotes were the earliest forms of life and they left prints in rocks dating back 3.5 billion years…actually, on second thought, let’s skip some of the biology.

Of course, language fossils are more recent, but they can also be scrutinized and theororized (fine, I know it’s theorized). Words are fascinating and they can most likely be rooted back about 20,000 years to Indo-European ancestors of Sanskrit, Greek, Latin, and Germanic tongues, among others. Take our example of the two types of leeches and you see why it’s not all clear on the surface. Leech the doctor goes back to the word “leg,” meaning “to collect” and with derivatives meaning “to speak.” Implications of knowledge started to follow the word, as it became “laece” in Old English, “lake” in Middle Dutch, and “lekjaz” in early German. Leech the worm is classified by the Oxford English Dictionary as being “lyce” in the tenth century, then “laece” a bit later, and “leech” in the Middle Ages. At some point, leech the doctor and leech the worm (and, incidentally, leech the tax collector) fused into the same general idea of collecting (blood, fees, etc.).

Other medically related words also have interesting stories. “Doctor” comes from “dek,” meaning proper, acceptable, and useful. In Latin, this became “docere” – to teach – and “discere” – to learn – whence “disciple.”


The word “medicine” came about from the root “med,” used to refer to measuring things out. In Latin, “mederi” meant “look after and heal.” The same root gave rise to words such as “moderate” and “modest,” perhaps good words for current students and doctors to reflect on.

Admittedly, most discussions of medical etymology aren’t this mature. We learn about “cardiac tamponade” (where fluid accumulates in the sac surrounding the heart) and then realize that “tampon” is French for “plug.” We see images of the “tunica intima” (which lines arteries and veins) and whisper about how it’s like an intimate tunic. I could talk about words all day, every day. For now, over and out!