Tuesday, October 23, 2012

Get over it: depression is real


“Get over it!”

I am not talking to the people who are clinically depressed and saying “snap out of it.” I am talking to the people who continue to question the legitimacy of major depressive disorders and I’m saying “get over yourself.” I can tell you right now that when you have close friends or relatives go through this, you will no longer try to explain away concerns by telling people that everyone has bad days, that they are simply tired and cranky, and that once they get fresh air they’ll realize life is just dandy.


Depression is a chronic illness and we need to treat patients – and their friends and families – with the respect and care they deserve. It’s like diabetes or high blood pressure: it’s real and it’s treatable. So, the next time someone asks you whether depression is a real illness or just someone being wimpy about their sadness, please set them straight for me. Further, if you recognize the symptoms we’ll talk about, dare yourself to start a conversation with the person you’re concerned about; as far as treatment goes, the sooner the better. (I’ve already talked a bit about how the Affordable Care Act will impact coverage of psychiatric illness. In the future, I promise you posts about alcoholism, anxiety, schizophrenia, and obsessive-compulsive disorder. And, yes, you guessed it: these will all come to you from the perspective of someone who has friends and family who have suffered.)


Back to the basics

A recent seminar at the medical school at Michigan left many students with questions (some as blatantly inappropriate as “when did we decide depression was an illness we should prescribe drugs for?”), so I’ll go back and cover a few basics. The Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), defines major depressive disorder as a syndrome: the diagnosis requires a patient to have at least five of nine symptoms, consistently, for more than two weeks. These symptoms are what you might presume: depressed mood most of the day, diminished interest in almost all activities, significant weight loss or gain, sleep changes, physical agitation, intense fatigue, excessive guilt and feelings of worthlessness, diminished concentration and indecisiveness, and suicidal thoughts. For this post, I’m focusing on major, unipolar depression, but some of what we’ll talk about also applies to bipolar disorder, particularly given that people with bipolar disorder tend to experience “lows” that are longer and more severe than the “highs.”
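
If you happen to read a little code, here’s one way to picture what “syndrome” means here: the diagnosis is a threshold, not a single lab test. Below is a purely illustrative sketch of that five-of-nine, two-week rule – the symptom strings and the little helper function are my own toy framing, emphatically not a diagnostic tool.

```python
# Toy illustration of the DSM-IV threshold described above:
# at least five of the nine symptoms, persisting for more than two weeks.
# Not a diagnostic tool -- the names and function are illustrative only.

DSM_IV_SYMPTOMS = {
    "depressed mood most of the day",
    "diminished interest in almost all activities",
    "significant weight loss or gain",
    "sleep changes",
    "physical agitation",
    "intense fatigue",
    "excessive guilt and feelings of worthlessness",
    "diminished concentration and indecisiveness",
    "suicidal thoughts",
}

def meets_dsm_iv_threshold(reported_symptoms, duration_days):
    """True if 5+ of the 9 symptoms have been present for more than two weeks."""
    count = len(DSM_IV_SYMPTOMS & set(reported_symptoms))
    return count >= 5 and duration_days > 14

# Four symptoms for three weeks: below the threshold, so this prints False.
print(meets_dsm_iv_threshold(
    {"depressed mood most of the day", "sleep changes",
     "intense fatigue", "diminished concentration and indecisiveness"},
    duration_days=21,
))
```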


Over the course of a lifetime, 16% of people get depressed. That’s a lot. Of those people, only half get treated, despite what we now know about the host of associated problems, ranging from loss of productivity to increased cancer mortality. A few other interesting numbers: nearly 15% of women experience post-partum depression, about half of patients who have experienced a heart attack also develop depression, and more than a quarter of adult Americans suffer from a diagnosable mental disorder in a given year. Those numbers are so high that it’s essentially impossible that you don’t know someone who has dealt with mental illness.

Disease progression in depression is an interesting phenomenon, and it has much to do with brain chemistry, some of which remains incompletely understood. What we do know for sure, though, is that after one episode, there is a 50% chance of another depressive period; after a second episode, there is an 80% chance of a third. The first depressive period usually requires a relatively major stressor, but the threshold lowers with each subsequent episode until eventually the trigger is endogenous. Suicidal thoughts and behaviors, arguably the most alarming symptom of depression, are actually more closely related to impulsiveness than to the severity of the depression. In fact, hospitalization is recommended in large part simply to give patients time to change their minds. In interviews with people who jumped off the Golden Gate Bridge and survived, the vast majority reported changing their minds on the way down. Apparently, for some people the impulsiveness passes rather quickly. And, no, I’m obviously not at all proud that our quintessentially Californian bridge is the world’s leading suicide site.
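
To chain those two numbers together – a back-of-the-envelope calculation that simply takes the quoted figures at face value – once someone has had a first episode, the odds of eventually reaching a third are already around 40%:

```latex
% Chaining the quoted recurrence figures (my own rough arithmetic, not a study result):
% P(second episode | first) = 0.5 and P(third | second) = 0.8, so
\[
  P(\text{third episode} \mid \text{first episode}) \approx 0.5 \times 0.8 = 0.4 .
\]
```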



So what’s one to do? Think about treatment. And then start right away.

Behavioral changes are the first line of attack. Most doctors, psychologists, psychiatrists, and therapists will recommend changes in exercise, sleep hygiene, and the like. In conjunction with these lifestyle changes, cognitive behavioral therapy (CBT) can have an enormously positive effect. What exactly is CBT? It’s the best thing since sliced bread (which actually sometimes gets stale, anyhow). CBT addresses specific maladaptive behaviors and thought processes, providing patients with explicit goals and systematic tools. It’s incredibly effective for a whole host of disorders, including substance abuse, eating disorders, and major depression. This – I promise you – is solid evidence-based practice. For many psychiatric illnesses, current environmental stressors are a much better predictor of disease progression than are early life factors. That is to say, recent life stress and chronic stress matter more than a dysfunctional childhood. Psychodynamic and Freudian styles of talk therapy are decidedly out of vogue…since they are ineffective and were never based on hard science. CBT, in contrast, won’t necessarily uncover all of your secret histories and deepest conflicts, but it will give you concrete tools to take with you out the door and apply to your life, that very day. It helps you identify your own specific negative beliefs and behaviors and learn to replace them with positive, healthy ones. The principle of the therapy in some ways stems from the age-old advice that even if you can’t change a situation, you can change the way you think about and respond to it.

For many people, behavioral changes are not enough, though, as the brain chemistry is already too out of whack to rebalance itself without some help. There is nothing wrong with needing medication and taking it says absolutely nothing about a person’s intellect, personality or strength. If you are even remotely tempted to think otherwise, please feel free to join the rest of us in this century, along with common sense, decency, and science.

In case I get a reader who doesn’t speak Science (and since I’m not completely fluent in that language myself), we’ll stick with fairly basic information on drugs. The vast majority of doctors start off by prescribing a selective serotonin reuptake inhibitor (SSRI, such as Prozac), which increases the amount of serotonin in the synapses between neurons. To make a long story short, serotonin is a neurotransmitter that relays messages between brain cells and helps regulate mood, appetite, sleep, learning, and sexual desire. Interestingly, the biochemical change happens within hours, yet the antidepressant effect usually takes weeks. We don’t completely understand the mechanistic pathways involved, and we have never measured levels of serotonin in a living human brain. We’ve measured blood levels of neurotransmitters such as serotonin (and, yes, levels are lower in depressed people), but we actually don’t know how well blood levels correspond to brain levels. Additionally, there is dispute among researchers about exactly why SSRIs are effective, with some scientists claiming that it has less to do with the increased serotonin-mediated messaging and more to do with a regeneration of brain cells that serotonin induces.


Other commonly used drugs include serotonin and norepinephrine reuptake inhibitors (SNRIs, such as Cymbalta) and norepinephrine and dopamine reuptake inhibitors (NDRIs, such as Wellbutrin). The pertinent biochemistry here is that norepinephrine is a stress hormone affecting attention and memory and dopamine is a neurotransmitter implicated in the reward-driven learning system. The rest of the antidepressants out there fall into one of three categories: tricyclic antidepressants (yup, the chemical structure has three rings; they’re not used much anymore), atypical antidepressants (they don’t fit neatly in another category; they’re often taken in the evening to help with sleep), and monoamine oxidase inhibitors (MAOIs, usually prescribed as a last resort since there can be deadly interactions with certain foods). A little side note: MAOIs were discovered back in the 1950s, during trials for new drugs to treat tuberculosis. It turned out that the medication had psychostimulant effects on the tuberculosis patients. The history of medicine is crazy! (Pun intended. Pun always intended, tasteful or otherwise.)

For many, finding the right medication takes a while. It’s a process of trial and error that frustrates patients for many reasons: some medications take up to two full months to take effect, the side effects are often worse at the beginning, and there can be withdrawal symptoms if you stop too abruptly. There are a few recent developments that can help patients find the right medication, including blood tests that check for specific genes thought to affect how your body handles antidepressants. The cytochrome P450 (CYP450) genotyping test is one such predictor of how an individual might metabolize a medication.

Last resorts include ketamine and electroconvulsive therapy; we usually save these for patients whose depression has resisted other forms of treatment. Ketamine initially causes brief, psychosis-like dissociative episodes, working much like phencyclidine (PCP, aka “angel dust”) and other hallucinogens used recreationally. After a few hours, it then acts as an antidepressant through its action as an antagonist at glutamate (NMDA) receptors. In striking contrast to the other drug options, electroconvulsive therapy (ECT) actually involves passing electrical currents through the brain, theoretically to affect levels of neurotransmitters and growth factors. Despite the suspicion surrounding ECT and its side effects, it often brings rapid relief even when other treatments have failed. Of all depressed people treated, only 1 to 2% receive ECT.



Sadly, there’s still so much we don’t know

Depression is undoubtedly a real illness, but it’s not only “misunderstood” in the social sense of the term; it’s also incompletely understood within the realm of science. Given that it has a biological basis, perhaps it’s worth considering how evolution allows it to keep happening. Why would this ever confer an advantage? Why are there still people in the population who are predisposed? How on earth does this play into survival of the fittest? I have a few speculative thoughts, based – as per usual – on personal observation and somewhat anecdotal evidence.

Rates of depression are much higher in first-world countries, which might suggest something about how our lifestyles have changed faster than evolution can keep up. We are experiencing stress and anxiety at a whole new level, what with these new-fangled concepts of “long-term goals” and “delayed gratification.” I’m certainly in no position to pooh-pooh lengthy endeavors, given that I’m committed to another decade of education and training, but I do sometimes wonder whether people in third-world countries – living closer to that hunter-gatherer lifestyle – are better adapted to their types of daily stress. There is evidence related to other mental illnesses, too; for instance, people with schizophrenia have a much better prognosis in rural India, likely due to the familial and social structures in place and the markedly decreased stigmatization compared to industrialized areas.


The strong correlation between depression and anxiety is also worth touching on. At first glance, it may seem counterintuitive that the two are related, since they appear almost contradictory, but it turns out that many people experience both and that each predisposes you to the other. Without going into too many details, suffice it to say that once you look at the neurotransmitters involved in states of arousal (the nervous-energy kind, but also the sexual kind) and depression, it actually makes a lot of sense. There have been some particularly interesting recent studies of medical interns and residents (generally stressed, anxious people) looking at the differences in rates of depression between men and women. While about 20% of the men developed depression, a full 40% of the women did. Further, having a child during internship increased the likelihood of depression for women but had no effect on risk for men. More broadly, people who attended medical school outside of the United States were significantly less depressed. I’m not even going to try to go into that – way too many confounders and way too much opportunity to say things that are far from politically correct – but you can use your imagination as to why that might be.


There are some promising directions for future depression treatment research and it’s these questions we should be addressing, not the silly ones about whether it’s a major problem or not. Given what we see in patients with both depression and immune dysregulation, is it likely that cytokines play a role in disease onset? Would anti-inflammatory drugs alter depressive symptoms?  Since we see so much depression after strokes and heart attacks, should we investigate other cardiovascular factors? As about 40% of variance in depression can be attributed to genetics, should we devote more effort to searching for specific genetic polymorphisms? How useful is functional magnetic resonance imaging (fMRI) in studying the changes in activity and connectivity of depressed patients? Is it relevant that depressed people have smaller hippocampi and that their amygdalae are more strongly connected to their prefrontal cortices?

Scientist or not, I hope you learned something today…or, at the very least, revisited and reflected on important issues. If you take away anything at all, let it be this: depression is real, chronic, and treatable. Some of the most incredible people I know have dealt with it and mental illness should never, EVER be stigmatized. Let’s get on with it, progress with research, and talk about it openly.


Monday, October 15, 2012

Which came first: leech the doctor or leech the worm?


Comparative philology is a great combination of linguistics and history, so let’s do a little bit. We might wonder why “leech” used to be a word for the doctor and, at the same time, for the worm used by the doctor. Naturally, this raises the question of which came first: leech the doctor or leech the worm?
 

Dr. Lewis Thomas knew how to go about answering questions the right way: “The evolution of language can be compared to the biological evolution of species, depending on how far you are willing to stretch analogies. The first and deepest question is open and unanswerable in both cases: how did life start up at its very beginning? What was the very first human speech like?”

He was really THE man, Thomas was.

Anyhow, let’s get back to language and species. Since fossils exist for both, we can trace back to near the beginning. Prokaryotes were the earliest forms of life, and they left prints in rocks dating back 3.5 billion years…actually, on second thought, let’s skip some of the biology.

Of course, language fossils are more recent, but they can also be scrutinized and theororized (fine, I know it’s theorized). Words are fascinating, and they can most likely be rooted back about 20,000 years to the Indo-European ancestors of Sanskrit, Greek, Latin, and the Germanic tongues, among others. Take our example of the two types of leeches and you see why it’s not at all clear on the surface. Leech the doctor goes back to the root “leg,” meaning “to collect,” with derivatives meaning “to speak.” Implications of knowledge started to follow the word as it became “laece” in Old English, “lake” in Middle Dutch, and “lekjaz” in early German. Leech the worm is classified by the Oxford English Dictionary as “lyce” in the tenth century, then “laece” a bit later, and “leech” in the Middle Ages. At some point, leech the doctor and leech the worm (and, incidentally, leech the tax collector) fused into the same general idea of collecting (blood, fees, etc.).

Other medically related words also have interesting stories. “Doctor” comes from “dek,” meaning proper, acceptable, and useful. In Latin, this became “docere” – to teach – and “discere” – to learn – which gave us “disciple.”


The word “medicine” came about from the root “med,” used to refer to measuring things out. In Latin, “mederi” meant “look after and heal.” The same root gave rise to words such as “moderate” and “modest,” perhaps good words for current students and doctors to reflect on.

Admittedly, most discussions of medical etymology aren’t this mature. We learn about “cardiac tamponade” (where fluid accumulates in the sac surrounding the heart) and then realize that “tampon” is French for “plug.” We see images of the “tunica intima” (which lines arteries and veins) and whisper about how it’s like an intimate tunic. I could talk about words all day, every day. For now, over and out!

Wednesday, October 10, 2012

The Game of Life


Since my first post, I realize that I’ve strayed from interpreting and elaborating on scientific studies. It turns out that I easily get distracted and think about a lot of not-exactly-hard-and-basic-science things. It happens. As part of clinical training in medical school last week, I was particularly touched by an exercise on coping with death. (I wrote about some other favorite clinical experiences here.)

Mentally, I keep coming back to this exercise I did on end-of-life priorities and choices. It’s well-known and perhaps some of you have done it: I was given eight cards for each of three categories (possessions, goals, and people), making a grand total of 24 cards, and I was told to write out specific possessions, goals, and people that were important to me. The “game” consisted of me imagining my own disease progression and at every stage I was forced to give up a card from each category. What disturbed me most about the whole exercise was my ability to do it. When I think about that hour, I remain deeply unsettled about how decisive I was about priorities. Worse still, it didn’t take long for me to develop a sort of ranking for specific relatives and friends; there were real names on those cards. From the outset, I found myself wanting to cheat and use extra cards for the people pile. Seriously, I got way too into this game. It’s amazing what you start thinking about when you’re able to immerse yourself in some serious pretend.

They are just things

Maybe I bent the rules a little bit by counting my dog as a possession (I mean, I was already way out of people cards) and by counting a family cabin as a possession (though it’s in some sort of trust I don’t completely understand). Each cycle, I tore the possession card in half first. At the beginning, the Swiss dishes and running shoes went, then my first white coat, and later still photographs and scrapbooks. The last two were my dog and the cabin in Maine.




Hopes and dreams

Life goals are sometimes easy to list but rarely easy to rank. What was interesting was how easy it was to tear up the cards with goals like “visit Japan” (which I really, really want to do) and “restart piano” (I have perennial good intentions). A while later, I found myself making harder decisions about community contribution and professional development, yet I still found myself reaching to rip those cards long before the last couple: “marriage” and “children.” Funny how clear things become when you get right down to it.



I’m not sure how I feel about this people situation

I have no qualms about publicly stating that names of friends were torn before the names of family members. At that stage of the game, I was still able to feel calm and rational and justified. When I got down to my last four people (father, mother, sister, brother), I realized how good I was at playing pretend. Tears came to my eyes as I lifted the last few cards off the table. I’m already trying to forget about the fact that I was forced to have an order to those last cards. 

The point of the exercise was to help develop a sense of what it might feel like to be a patient with a terminal diagnosis. Maybe you are unable to keep all of your possessions as you age. As your energy wanes, you give up certain goals. You decide who you want with you at your deathbed. More broadly, though, we should all sometimes think about our priorities. If anything, we will be reminded of how our true values are not necessarily reflected by the way we spend our time. In large part, this is due to the difference between immediate needs and long-term priorities, but it still gives me pause. Sometimes the pause is rather long, I admit, given the deeply personal and critical nature of these questions. Don’t think about it too much – you need to live a little, too – but give it some consideration now and again.

Tuesday, October 2, 2012

M@M not M&M’s


I study medicine at Michigan (M@M) but I rarely eat M&M’s. I prefer the real, legit, dark stuff. This weekend, there was no Death by Chocolate (the best event ever invented by Pomona College), but rather Life by Chocolate. I probably would not have made it through the exam without it; plus, I am mildly amused by the sheer quantity of chocolate that was consumed while studying muscle innervation and contraction.



Before we get any further, you should know that this post is largely plagiarized (depending on what you mean by plagiarism) from the blog I kept while in Switzerland. Let’s skip past some of the culture, economics, and history, though, and get down to the science of chocolate. If you want a more complete story, ask me some time about chocolate’s relationships to religious ceremonies, currency exchanges, and gender discrimination.





As a long-standing chocolate connoisseur (read: addict), I knew a lot about chocolate production and chemistry before moving to Switzerland last year. The Swiss have the highest per capita rate of chocolate consumption in the entire world (11.6 kg, or 25.6 lbs., per capita per annum) and thus did nothing to moderate my addiction.

The production process begins with the manual harvesting of cocoa beans from tropical evergreen cocoa trees, which grow in the wet lowland tropics of Central and South America, West Africa, and Southeast Asia. Cocoa beans grow in pods the size of a football, sprouting off the trunk and branches of the trees. When the pods are orange and ripe, harvesters trek through the orchards with machetes, hack the pods off the trees, split them open, and let the beans begin fermenting in the sun.


I like that cocoa butter melts at about 35 degrees Celsius, just shy of our internal body temperature. But why – WHY?!? – does chocolate make you so happy? Yes, it tastes good, but people have known for centuries that it is also a drug. No, seriously, we’re talking addictive. The researcher Drewnowski of the University of Michigan explained in the 1990s that we crave chocolate in times of stress, anxiety, and pain because “chocolate is a natural analgesic, or pain killer.” The more than 300 chemicals that make up chocolate have numerous and varied effects on our bodies, largely through the nervous system, since they trigger the release of specific neurotransmitters. The tryptophan in chocolate stimulates the release of serotonin, known as an anti-depressant (which is why we treat some depression with selective serotonin reuptake inhibitors, SSRIs), as well as endorphins, which lessen pain and decrease stress. Serotonin is a neurotransmitter, meaning that it transmits nerve impulses across the synapses between neurons. It turns out that serotonin is biochemically derived from tryptophan, which makes sense when you look at the two structures.


One of the more unusual compounds in chocolate is phenylethylamine, a “chocolate amphetamine” that causes changes in blood pressure and blood-sugar levels, leading to excitement and alertness. Phenylethylamine is known as the “love drug” because it causes your heart rate to quicken. Another interesting compound found in chocolate is the lipid anandamide, which resembles the tetrahydrocannabinol (THC) found in marijuana. Like THC, anandamide activates cannabinoid receptors, which promotes the release of dopamine and leads to the feelings of well-being associated with a high.





Also, I just came across an article with tips on how to curb your chocolate cravings and addiction (which I have no intention of doing). Here are the steps, though, if you are interested:

Discover if the craving is emotional. (Mine sometimes is, and I don’t care because a little chocolate does indeed help.)

Incorporate small portions of chocolate into your usual diet, rather than restrict yourself. (That’s nice. I’d rather incorporate large portions into my diet…like, say, between every meal, every day.) 

If you are feeling bored and craving chocolate, go for a walk, run errands, call a friend or read a book. (Well, yeah, sure…or read your book while you eat the chocolate.)

Make sure you always have healthy food nearby, so you can replace chocolate with fruit a few times a day. (Fruit goes well with chocolate. Don’t be stupid.)

If you think it’s necessary, do not allow chocolate in the house. (Now that’s just crazy talk.)

Finally, it is a good idea to increase your level of exercise, since exercise also releases endorphins, which counteract stress, anxiety, and depression. (Oh, good. I’m addicted to exercise, too. The lid of the tin below is hanging on my fridge.)