Of Placebos and Pain

As the New York Times reported last week, a recent study in the BMJ found that acetaminophen is no more effective than a placebo at treating spinal pain and osteoarthritis of the hip and knee. For those who rely on acetaminophen for pain relief, this may not come as much of a surprise. Until recently, I was one of them. Because my mother suspected I had a childhood allergy to aspirin, I didn’t take it or any other NSAIDs until several years ago, when I strained my back and decided to test her theory by dosing myself with ibuprofen. To my great relief, I didn’t die. And I was surprised to discover that unlike acetaminophen, which generally dulled but didn’t eliminate my pain, ibuprofen actually alleviated my discomfort, albeit temporarily. Perhaps my mother was wrong about my allergy, or maybe I outgrew it. Either way, my newfound ability to take NSAIDs without fear of an allergic reaction allows me to reap the benefits of a medication that can offer genuine respite from pain, rather than merely rounding out its sharp edges.

But back to the study. Just because researchers determined that acetaminophen is no more effective than a placebo in addressing certain types of pain doesn’t necessarily mean that it’s ineffective. A better-designed investigation might have added another analgesic to the mix, comparing the pain-relief capabilities of acetaminophen and a placebo not just to each other, but to one or more additional medications: say, ibuprofen, aspirin, or naproxen. That would have allowed the researchers to rank pain relievers on a scale of efficacy and to determine whether the original result reflected the placebo effect (i.e., both acetaminophen and a placebo were effective) or the shortcomings of acetaminophen (i.e., both acetaminophen and a placebo were useless).
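To make that distinction concrete, here is a toy three-arm comparison in Python. Every number in it is invented for illustration; the arm names, effect sizes, and patient counts are my assumptions, not data from the BMJ study or any real trial.

```python
import random

random.seed(0)

# Toy three-arm "trial": each arm gets a made-up average pain reduction
# on a 0-10 scale. None of these numbers come from the BMJ study.
ASSUMED_EFFECTS = {"placebo": 1.0, "acetaminophen": 1.1, "ibuprofen": 2.5}

def simulate_arm(mean_reduction, n_patients=200):
    """Simulate individual pain reductions scattered around the assumed mean."""
    return [random.gauss(mean_reduction, 2.0) for _ in range(n_patients)]

results = {arm: simulate_arm(effect) for arm, effect in ASSUMED_EFFECTS.items()}

# Rank the arms by observed mean reduction, highest first.
for arm in sorted(results, key=lambda a: -sum(results[a]) / len(results[a])):
    mean = sum(results[arm]) / len(results[arm])
    print(f"{arm:>14}: mean pain reduction ≈ {mean:.2f}")
```

If acetaminophen and the placebo cluster together well below the third drug, the original finding looks like a shortcoming of acetaminophen; if all three arms land close together, a shared placebo effect is harder to rule out.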


In any case, what I find noteworthy is not the possibility that acetaminophen might not work, but that a placebo could be effective. One of the foremost issues with the treatment and management of pain—and a major dilemma for physicians—is the lack of an objective scale for measuring it. Pain is the most common reason for visits to the emergency room, where patients are asked to rate their pain on a scale of 0 to 10, with 0 indicating the absence of pain and 10 designating unbearable agony. Pain is always subjective, and it exists only to the extent that a patient perceives it in mind and body. This makes it challenging to address, as the experience of pain is always personal, always cultural, and always political.

The issue of pain—and who is qualified to judge its presence and degree—lays bare the question of whose pain is believable, and therefore whose pain matters. As historian Keith Wailoo has written, approaches to pain management reveal biases of race, gender, and class: people of color are treated for pain less aggressively than whites, while women are more likely than men both to complain of pain and to have their assertions downplayed by physicians. Pharmacies in predominantly nonwhite neighborhoods are less likely to stock opioid painkillers, while doctors hesitate to prescribe pain medication for chronic diseases and remain on the lookout for patients arriving at their offices displaying “drug-seeking behavior.”

Whether the pain of women and people of color is undertreated because these groups experience it differently or because doctors are inclined to interpret their reports differently, the disparity underscores the extent to which both pain and its treatment always occur in a social context. (In one rather unsubtle example, a researcher at the University of Texas found that Asians report lower pain levels due to their stoicism and desire to be seen as good patients.) Pain, as scholar David B. Morris has written, is never merely a biochemical process but emerges “only at the intersection of bodies, minds, and cultures.” Since pain is always subjective, all physicians can really do is treat the patient’s perception of it. And if the mind plays such an essential role in how we perceive pain, then it can be enlisted in alleviating our suffering, whether by opioid, NSAID, acetaminophen, or placebo. If we think something can work, then we open up the possibility for its success.

Conversely, I suppose there’s a chance that the BMJ study could produce a reverse-placebo effect, in which this new evidence that acetaminophen does not relieve pain will render it ineffective if you choose to take it. If that happens, then you have my sympathy and I urge you to blame the scientists.

 

Sources:

David B. Morris, The Culture of Pain. Berkeley, CA: University of California Press, 1991.

Keith Wailoo, Pain: A Political History. Baltimore, MD: Johns Hopkins University Press, 2014.

Our Diseases, Our Selves

Over the past few weeks, I’ve been following coverage of the Institute of Medicine’s recent recommendation of a new name and new diagnostic criteria for chronic fatigue syndrome. In a 250+ page report, the IOM, a division of the National Academy of Sciences, proposed that the disease be renamed “systemic exertion intolerance disease.” This would link it more closely with its central feature while distancing it from a designation many patients see as both demeaning and dismissive of the serious impairment that can accompany the condition. It’s a move that has been applauded by a number of advocates and researchers who study the disease, although others caution that more work is needed to develop a definitive test, as well as medications that can effectively treat it.


The disorder, which is also called myalgic encephalomyelitis (ME), is characterized by persistent fatigue lasting for more than six months, muscle and joint pain, unrefreshing sleep, and post-exertional malaise. Estimates of the number of affected Americans vary widely, from 836,000 to as many as 2.5 million.* I was struck by the divergence of these numbers, as well as by a statistic that might help explain why: the disease goes undiagnosed in an estimated 84 to 91 percent of patients. This could be the result of physicians’ lack of familiarity with ME/CFS, doubt about the seriousness of symptoms, or a belief that the patient is making up or exaggerating the extent of the illness. But regardless of your perspective on the disease, that’s an alarming rate of underdiagnosis.
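To see why such a high rate of underdiagnosis could produce such divergent estimates, here is a back-of-the-envelope calculation. The diagnosed count is a number I have invented purely for illustration; only the 84 to 91 percent range comes from the figures above.

```python
# Back-of-the-envelope: how an assumed underdiagnosis rate stretches a
# prevalence estimate. The diagnosed count below is hypothetical; only the
# 84-91 percent range comes from the text above.
diagnosed = 225_000  # hypothetical number of Americans with an ME/CFS diagnosis

for undiagnosed_rate in (0.84, 0.91):
    # If this share of patients is undiagnosed, the diagnosed are the remainder.
    implied_total = diagnosed / (1 - undiagnosed_rate)
    print(f"{undiagnosed_rate:.0%} undiagnosed -> roughly {implied_total:,.0f} affected")
```

Holding the diagnosed count fixed, moving the assumed underdiagnosis rate from 84 to 91 percent nearly doubles the implied total, which is one way a single disputed statistic can widen a prevalence range.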

As I’ve been perusing the responses from patients and comments from the public debating the nature of the disorder, I’ve noticed that reactions to the IOM recommendations tend to fall into one of two camps. One group is sympathetic to the disease and its sufferers, urging compassion, education, and continued research; not surprisingly, this group seems to consist mainly of patients with ME/CFS, people who have friends or relatives with it, and physicians who treat them. The second group sees patients as malingerers who are overstating their symptoms to get special consideration; they blame our modern lifestyle for inducing widespread fatigue in our population and point to the lack of a conclusive diagnostic test as evidence that the disease doesn’t exist.

All of this brings me to the following question, which I think is relevant not just to the current discussion but to the entire enterprise of Western medicine: what makes a disease “real”? When are diseases worthy of sympathy and concern, insurance reimbursement, research money, and pharmaceutical therapies, and when are they considered to exist only within a patient’s imagination? Few people in the twenty-first century would dispute, for instance, that pneumonia, malaria, and yellow fever are caused by particular microorganisms, their presence in the body detectable through various tests of blood and fluids. But what about conditions for which we have not yet identified a specific pathology? Does the lack of a clear mechanism for the causation of a disease mean that those who are affected by it are suffering any less? Are a patient’s perceived symptoms enough for an ailment to be considered “real”?

I’m distinguishing here between “disease,” which is a pathological condition that induces a particular set of markers of suboptimal health in an individual, and “illness,” which is the patient’s experience of that disease. Naming a disease confers legitimacy; being diagnosed with it assigns validity to a patient’s suffering, gives him a disease identity, and connects him with a community of the afflicted. And if naming a disease confers a degree of legitimacy, then outlining a biological mechanism for it bestows even more. Disorders with an identifiable pathology are “real,” while all others are suspect. But this process is subject to revision. As the history of medicine shows us, a number of conditions that were once attributed to a lack of morality and self-control, most notably alcoholism and addiction, are now considered real. Others, including hysteria, chlorosis, neurasthenia, and homosexuality, were once classified as diseases but are no longer recognized as such.

“Disease,” as Charles Rosenberg reminds us, “does not exist until we have agreed that it does, by perceiving, naming, and responding to it.” It always occurs within a social context and makes little sense outside of the social and cultural environment within which it is embedded. That is why, to varying degrees, what physicians are responding to is a patient’s subjective assessment of how she is experiencing disease: the level of pain, the physical disability, the fatigue, the fever, the extent to which an ailment is interfering with her life.

To say that diseases can exist independently of us is to misunderstand their fundamental nature as human concepts and social actors. They are not mere biological events, but are made legible and assigned meaning through our system of fears, morals, and values. Whether the proposed name change from chronic fatigue syndrome to systemic exertion intolerance disease will lead to greater acceptance for the disorder and those who suffer from it remains to be seen. But it's brought attention to the process by which we define and name diseases. The ways in which we explain their causation and assign responsibility and blame set forth standards for acceptable behavior and delineate the boundaries of what we consider normal. Our relationship with disease reveals how we understand ourselves as a society. All diseases are therefore both not real and real—not real in the sense that they wouldn't exist without us, and real because we have agreed that they do.

 

*By way of comparison, about 5 million Americans are currently living with Alzheimer’s and about 1.2 million have HIV.

 

Sources:

Charles E. Rosenberg and Janet Golden, eds. Framing Disease: Studies in Cultural History. New Brunswick, NJ: Rutgers University Press, 1992.

 

 

Medicating Normalcy


When I was in elementary school in the 1970s, I was friends with a boy who was considered hyperactive, which I vaguely understood to mean that he had excess energy and was therefore not supposed to eat sugar. He was occasionally disruptive in class and often had trouble focusing on group activities. My friend seemed to be constantly in motion, bouncing up from his chair during spelling tests and sprinting through the playground at recess, unable to keep still or remain quiet for any length of time. Another classmate, a girl, was a year older than the rest of us because she had been held back to repeat a grade for academic reasons. She was “slow,” a term we used at the time to refer to someone with a cognitive developmental disability.

If these two were growing up today, there’s a good chance they would be diagnosed with an attention disorder and medicated with a drug such as Adderall or Concerta. While A.D.H.D. has been around for a while—it’s been listed in the Diagnostic and Statistical Manual of Mental Disorders in some form since at least 1968—the rate at which it is diagnosed in children has skyrocketed over the past few decades. As Alan Schwarz reported several months ago in the New York Times, the number of children taking medication for A.D.H.D. has increased from 600,000 in 1990 to 3.5 million today, while sales of stimulants prescribed for the condition rose more than fivefold in just one decade, from $1.7 billion in 2002 to nearly $9 billion in 2012. And researchers recently identified a new form of attention disorder in young people. Called “sluggish cognitive tempo,” it’s characterized by daydreaming, lethargy, and slow mental processing. It may affect as many as two million American children and could be treated with the same medications currently used for A.D.H.D.

This apparent epidemic of behavioral disorders in children highlights the convergence of a number of factors. In the late 1990s, changes in federal guidelines allowed the direct marketing of drugs to consumers, prompting increased awareness of disordered behaviors such as those which characterize A.D.H.D. Pharmaceutical companies routinely fund research into illnesses for which they manufacture drug therapies. As Schwarz (again) found, some of the chief supporters of sluggish cognitive tempo have financial ties to Eli Lilly; the company’s drug Strattera is one of the main medications prescribed for A.D.H.D. At the same time, overworked teachers in underfunded school districts lack the capacity to give special attention to rambunctious students, and instead urge parents to medicate them to reduce conflict in the classroom. Most important, the definition of what constitutes “normal” has narrowed. Thirty years ago, my unruly friend who wanted to run around during reading time and my absentminded classmate who forgot to write her name on her tests fell toward the extremes on the spectrum of normal behavior. Today they might be diagnosed with A.D.H.D. or sluggish cognitive tempo and given medication to make them less rowdy or more focused.

Normal childhood behavior these days means paying attention in class, answering all the questions on tests, turning in homework on time, and participating in classroom activities in a non-disruptive way. Children today, in short, are expected to be compliant. There will always be those who lack the ability to conform to an ever-constricting range of what constitutes normal behavior. For families with the access and the interest, pharmaceutical companies offer drugs designed to bring these young people within the bounds of what we consider acceptable.