Theranos and the Allure of Numbers-Based Medicine

The recent announcement that Theranos CEO Elizabeth Holmes has been banned from the blood-testing industry for two years is the latest chapter in the company’s rise and fall, a cautionary tale about what can happen when media hype and millions of dollars in investment collide with the revolutionary but untested claims of a driven, dynamic founder.

Until Theranos came under scrutiny from federal regulators, much of the laudatory press coverage focused on the company’s origin story—the turtleneck-clad Stanford dropout who idolized Steve Jobs and wanted to change the world through technology. Holmes landed on the covers of Fortune, Forbes, Inc. and T: The New York Times Style Magazine, and the New Yorker and Wired published lengthy profiles. At its peak, Theranos was valued at $9 billion, making Holmes the youngest self-made female billionaire in the world, at the helm of an enterprise whose board was packed with luminaries including former secretaries of state Henry Kissinger and George Shultz.

Holmes claimed her company had developed a process that would upend American medicine by allowing dozens of laboratory tests to be run off a few drops of blood at a fraction of the cost of traditional methods. But whether its technology actually works is still an open question, as Holmes has never allowed it to be examined by outside researchers, nor its data to be peer-reviewed. Last fall, the Wall Street Journal reported that Theranos’s proprietary Edison machines were inaccurate and that the company had been running tests on the same equipment used by established labs such as Quest Diagnostics and Laboratory Corporation of America. This set in motion a spate of bad news for the startup: investigations by the Centers for Medicare and Medicaid Services, the Securities and Exchange Commission, and the U.S. Department of Justice; the cancellation of an agreement with Walgreens to open blood-testing centers in pharmacies nationwide; the voiding of two years of Theranos blood results; and class-action lawsuits from consumers who say their health was compromised by faulty data. (For an excellent summary of the company’s rise and fall, check out this graphic from NPR.)

The excitement over Theranos was based on its claim of proprietary technology that, if real, had the potential to revolutionize lab testing and the healthcare decisions that are based on it. But at the core of its vision was a less sensational though equally central premise: that direct-to-consumer blood testing is the future of American healthcare. As Holmes put it in a 2014 TEDMED talk, enabling consumers to test themselves for diseases before showing any symptoms would “redefine the paradigm of diagnosis.” By determining their risk for a condition before developing it, people could begin treatment at an earlier stage. Take, for example, Type 2 diabetes, which Holmes says drives 20 percent of our healthcare costs and can be reversed through lifestyle changes: 80 million Americans have a condition called prediabetes, and most of them don’t know it, because it generally produces no symptoms—no headache, no muscle pains, no nausea or fever or chills—and is detectable only through a blood test.

The removal of the subjective experiences of the patient from the act of diagnosis has been a part of medical practice since the mid-1800s, when the modern stethoscope made it possible to observe the internal workings of the body in a non-invasive way. By the beginning of the twentieth century, an assortment of new instruments gave doctors access to technical information that patients could neither see nor interpret. The laryngoscope and electrocardiograph offered data independent of an individual’s perceptions, while a new device to measure blood pressure found its place in the doctor’s medical bag. Hemocytometers and hemoglobinometers enabled microscopic examination of the size and number of blood cells, allowing hematologists, as these specialists became known, to read the blood and manipulate it to treat various disorders. Together, these instruments reduced the physician’s reliance on a patient’s subjective description of symptoms in favor of precise, quantifiable data.

As diagnostic technologies have grown more sophisticated, they have given rise to a number of symptomless conditions that didn’t previously exist. Many of these are defined by deviation from a numerical threshold: high blood pressure, for instance, or prediabetes. But as physician and historian Jeremy A. Greene has written, these numbers can change due to shifting medical opinion or adjustment by pharmaceutical companies, which have an incentive to make the population of patients who are candidates for their drugs as large as possible. When the American Diabetes Association lowered the threshold for prediabetes in 2003, the population of prediabetics instantaneously expanded. No one’s health changed that quickly, just our definition of which patients had a condition and who should take medication for it.
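To make the arithmetic of that reclassification concrete, here is a minimal sketch in Python. The fasting-glucose readings are invented for illustration; only the cutoffs (the older 110 mg/dL threshold and the revised 100 mg/dL one) reflect the 2003 change described above.

```python
# A toy cohort of fasting glucose readings in mg/dL.
# The values are hypothetical; only the cutoffs reflect the 2003 ADA change.
cohort = [92, 97, 101, 104, 108, 112, 118, 126]

def count_prediabetic(readings, lower_cutoff, diabetes_cutoff=126):
    """Count readings in the 'prediabetes' range [lower_cutoff, diabetes_cutoff)."""
    return sum(lower_cutoff <= r < diabetes_cutoff for r in readings)

print(count_prediabetic(cohort, 110))  # older threshold: 2 people labeled prediabetic
print(count_prediabetic(cohort, 100))  # revised threshold: 5 people labeled prediabetic
# Same cohort, same blood -- only the definition moved.
```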

The assumptions underlying a medicine-by-numbers approach are that disease is detectable with diagnostic instruments before the onset of experiential symptoms, and that more data is always better. Our blood does contain an immense amount of crucial information about our well-being, from levels of vitamin D and electrolytes to the presence of bacteria and antibodies. But as the history of blood testing shows, the idea of blood as an infallible roadmap to one’s health, a substance that with the proper analysis will inevitably reveal incipient disease, has not always held up. More data is not always more useful, especially if we lack the tools to understand it or if the medical meaning of the information is in flux. Three separate readings of the CA 15-3 biomarker for breast cancer may look nearly identical to a physician, writes Eleftherios P. Diamandis of the University of Toronto, but in a patient they could prompt reactions ranging from anxiety to jubilation, depending on where the numbers fall as predictors of cancer recurrence.

Defining diseases solely by numerical thresholds invites the possibility that these numbers could be manipulated, and with them the boundary between health and disease. Today’s normal cholesterol might be tomorrow’s borderline hyperlipidemia. Numbers-based medicine may hold enormous appeal in its apparent ability to translate the opacity of blood into quantifiable data, but treating every out-of-range figure as a marker of proto-disease is no guarantee that we’ll end up any healthier. We may just end up with more information.

 

Sources:

Eleftherios P. Diamandis, “Theranos Phenomenon: Promises and Fallacies.” Clinical Chemistry and Laboratory Medicine 53, 7 (June 2015).

Jeremy A. Greene, Prescribing By Numbers: Drugs and the Definition of Disease. Johns Hopkins University Press, 2006.

Keith Wailoo, Drawing Blood: Technology and Disease Identity in Twentieth-Century America. Johns Hopkins University Press, 1999.

Pathologizing Childhood Behavior

Several weeks ago the New York Times published a disturbing front-page story on the use of psychiatric medications in very young children. The article, by Alan Schwarz, describes a sharp uptick in the number of prescriptions for antipsychotics and antidepressants to address violent or withdrawn behavior in children under the age of two. I’ve written about Schwarz’s superb prior reporting on the increasing prevalence of psychiatric diagnoses in children and the aggressive role of pharmaceutical companies in promoting medications to treat them. But his latest work reveals an alarming new trend in addressing behavioral disorders in children, one that encapsulates much of what’s wrong with the American healthcare system and our contemporary attitudes toward illness.

The risks of using psychiatric medications such as Haldol and Prozac on neurologically developing brains are not known, because the experiments have never been done in children—and won’t be, for ethical reasons. In adults, antipsychotics are generally used to treat symptoms of schizophrenia and can have long-term, debilitating side effects. These range from feelings of numbness and a lack of emotion to a condition called tardive dyskinesia, which is characterized by involuntary, repetitive movements, usually facial twitching, and is often irreversible.

While children as young as eighteen months or two years are obviously not ideal candidates for cognitive behavioral therapy, which can be extremely effective in addressing behavioral disorders in adults, there are still ways to attend to the underlying issues and attempt to determine what's causing them. As one of the experts quoted in the article notes, however, this takes time and money at all levels, as well as patience. The system of health insurance reimbursement in the United States favors shorter physician visits over longer ones, making it faster and thus more profitable to write a prescription than to address a patient’s issues in a lengthier, more wide-ranging way. It’s far easier to medicate away a symptom than it is to address its source, especially for overworked, stressed-out parents and for physicians who are not necessarily rewarded financially for emphasizing a social rather than a biomedical approach to the treatment of behavioral disorders.

Finally, there’s the idea that physicians are more likely to prescribe something for a particular condition if a medication to address its symptoms is readily available. This makes intuitive sense: if a patient has high blood pressure or high cholesterol, then prescribing an antihypertensive or a statin would presumably follow. Similarly, a person with signs of depression might receive a prescription for an antidepressant, just as someone who suffers from migraine headaches might benefit from a drug that addresses the condition’s multiple symptoms. But the very existence of a medication to treat an illness can contribute to perceptions of that illness’s prevalence. In some instances, a medication can create an illness; in others, it can make an existing one more visible. Take, for example, menopause and erectile dysfunction. Until recently, both were considered ordinary consequences of aging. Then hormone replacement therapy and Viagra emerged as pharmaceutical remedies for each condition, medicalizing them and rendering them abnormal. (Recommendations for hormone replacement therapy in post-menopausal women changed abruptly in 2002 when the Women’s Health Initiative study found that the standard regimen increased a woman’s risk of heart disease and breast cancer.) And what’s abnormal must be made normal, whether the deviation is physiological, hormonal, or numerical. But behavioral disorders are harder to define, and therefore the threshold of who needs treatment will vary.

I’m not suggesting that doctors stop prescribing psychiatric medications to children altogether, as experts agree that antianxiety drugs such as Klonopin are an appropriate way to treat seizures in young patients; although the long-term side effects are unknown, the consequences of leaving seizures untreated are worse. But it’s the increasing use of these medications for an ever-expanding list of behavioral disorders that’s of concern, both in what it indicates about our contracting sense of normal childhood conduct and in the reluctance of physicians to take a more expansive approach to addressing it. We should embrace a broader, more forgiving view of what it means to be a child and work to ensure that our healthcare system considers psychiatric care in a comprehensive way. Utilizing counseling and social support instead of instinctively reaching for a prescription pad may be a more time-consuming and expensive way to treat behavioral disorders, but it's one that involves fewer unknown long-term risks to very young brains.

 

 

What's In a Name?


Last week, the World Health Organization issued guidelines for naming new human infectious diseases. Concerned about the potential for disease names to negatively impact regions, economies, and people, the organization urged those who report on emerging diseases to adopt designations that are “scientifically sound and socially acceptable.” “This may seem like a trivial issue to some,” said Dr. Keiji Fukuda, Assistant Director-General for Health Security, “but disease names really do matter to the people who are directly affected. We’ve seen certain disease names provoke a backlash against members of particular religious or ethnic communities, create unjustified barriers to travel, commerce and trade, and trigger needless slaughtering of food animals. This can have serious consequences for people’s lives and livelihoods.”

According to the new guidelines, the following should be avoided: geographic locations (Lyme disease, Middle East Respiratory Syndrome, Rocky Mountain Spotted Fever, Spanish influenza, Japanese encephalitis); people’s names (Creutzfeldt-Jakob disease, Lou Gehrig’s disease, Alzheimer’s); animal species (swine flu, monkeypox); references to an industry or occupation (Legionnaires’ disease); and terms that incite undue fear (fatal, unknown, epidemic).

Instead, the WHO recommends generic descriptions based on the primary symptoms (respiratory disease, neurologic syndrome, watery diarrhea); affected groups (infant, juvenile, adult); seasonality (winter, summer); the name of the pathogen, if known (influenza, salmonella); and an “arbitrary identifier” (alpha, beta, a, b, I, II, III, 1, 2, 3).

Stigmatization caused by disease names is a legitimate concern, as we’ve seen that the way in which an appellation is chosen can have very real consequences for a community. It can alter perceptions of who is susceptible, which in turn can affect how doctors make their diagnoses and devise plans for treatment. It can shape social attitudes toward both patients and those who remain disease-free, and it can influence decisions about research and funding. When AIDS first emerged in the United States in the early 1980s, it was named GRID, or Gay Related Immune Deficiency, a measure of the extent to which it was associated with gay men. While gay and bisexual men remain the group most severely affected by HIV today, the disease’s original name undoubtedly shaped public perceptions of who was—and wasn’t—at risk.

But stigmatization can also happen apart from the process of naming a disease, a matter that the WHO guidelines would do nothing to address. In 2003, an outbreak of SARS (Severe Acute Respiratory Syndrome) in China, Vietnam and Hong Kong led to widespread stigmatization of Asian American communities as people avoided Chinatowns, Asian restaurants and supermarkets, and sometimes Asians themselves. The 1983 classification of Haitians as a high-risk group for HIV by the Centers for Disease Control and Prevention prompted a backlash against people of Haitian descent, and from 1991 to 1994 the US government quarantined nearly 300 HIV-positive Haitian refugees at Guantanamo Bay, Cuba. And then there are the diseases that have been renamed in an attempt to destigmatize them, although their new monikers would be considered unsuitable under the WHO guidelines. Leprosy, for example, is often referred to as Hansen’s disease, particularly in Hawaii, where the contagious, highly disfiguring illness devastated families and led to the establishment of disease settlements on the islands.

I’m not in favor of stigmatization, but as someone who studies the history and sociology of illness, I can’t help but wonder if something will be lost if the WHO’s recommendations are widely adopted. A disease name can influence its place in the public consciousness; it can simultaneously bring to mind a particular location or person and a constellation of symptoms. A single word, poetic in its succinctness, can suggest a range of images and associations—biological, psychological, political, and cultural. Would Ebola have the same resonance if it were called viral hemorrhagic fever? How much of our perception of Lou Gehrig’s disease, also known as amyotrophic lateral sclerosis, involves our knowledge of the tragic physical decline of the once formidable Yankees slugger?

There are, of course, plenty of evocative disease names that don’t contain a geographic location or a person’s name: polio, for instance, or cholera. But the WHO guidelines all but guarantee that the names for emerging diseases, while scientifically accurate and non-stigmatizing, will be cumbersome, clunky designations that do little to capture the public imagination. After all, who remembers the great A(H1N1)pdm09 pandemic of 2009?

Of Placebos and Pain

As the New York Times reported last week, a recent study in the BMJ found that acetaminophen is no more effective than a placebo at treating spinal pain and osteoarthritis of the hip and knee. For those who rely on acetaminophen for pain relief, this may not come as much of a surprise. Until recently, I was one of them. Because my mother suspected I had a childhood allergy to aspirin, I didn’t take it or any other NSAIDs until several years ago, when I strained my back and decided to test her theory by dosing myself with ibuprofen. To my great relief, I didn’t die. And I was surprised to discover that unlike acetaminophen, which generally dulled but didn’t eliminate my pain, ibuprofen actually alleviated my discomfort, albeit temporarily. Perhaps my mother was wrong about my allergy, or maybe I outgrew it. Either way, my newfound ability to take NSAIDs without fear of an allergic reaction allows me to reap the benefits of a medication that can offer genuine respite from pain, rather than merely rounding out its sharp edges.

But back to the study. Just because researchers determined that acetaminophen is no more effective than a placebo in addressing certain types of pain doesn’t necessarily mean that it’s ineffective. A better-designed investigation might have added another analgesic to the mix, comparing the pain-relief capabilities of acetaminophen and a placebo not just to each other but to one or more additional medications: say, ibuprofen, aspirin, or naproxen. That would have enabled the researchers to rank pain relievers on a scale of efficacy and to isolate whether the results of the first study were due to the placebo effect (i.e., both acetaminophen and a placebo were effective) or to the shortcomings of acetaminophen (i.e., both acetaminophen and a placebo were useless).
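As a rough sketch of that interpretation logic, consider the comparison below, written in Python with entirely invented effect sizes; none of these numbers come from the BMJ study, and the 0.5-point margin is an arbitrary assumption.

```python
# Hypothetical mean pain reduction (on a 0-10 scale) for each trial arm.
# All numbers are invented to illustrate the reasoning, not real trial results.
arms = {"placebo": 1.1, "acetaminophen": 1.2, "ibuprofen": 2.4}

MARGIN = 0.5  # smallest between-arm difference we treat as meaningful (assumed)

def interpret(arms, margin=MARGIN):
    placebo = arms["placebo"]
    acetaminophen = arms["acetaminophen"]
    comparator = arms["ibuprofen"]
    if acetaminophen - placebo >= margin:
        return "acetaminophen outperforms placebo"
    if comparator - placebo >= margin:
        # The trial can detect a real drug effect, so acetaminophen's
        # failure to beat placebo points to the drug's shortcomings.
        return "shortcomings of acetaminophen"
    # Nothing separates from placebo: either every arm is riding the
    # placebo effect or none of the drugs is doing much at all.
    return "indistinguishable from placebo across the board"

print(interpret(arms))  # -> "shortcomings of acetaminophen"
```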


In any case, what I find noteworthy is not the possibility that acetaminophen might not work, but that a placebo could be effective. One of the foremost issues with the treatment and management of pain—and a major dilemma for physicians—is the lack of an objective scale for measuring it. Pain is the most common reason for visits to the emergency room, where patients are asked to rate their pain on a scale of 0 to 10, with 0 indicating the absence of pain and 10 designating unbearable agony. Pain is always subjective, and it exists only to the extent that a patient perceives it in mind and body. This makes it both challenging and complicated to address, as the experience of pain is always personal, always cultural, and always political.

The issue of pain—and who is qualified to judge its presence and degree—unmasks the question of whose pain is believable, and therefore whose pain matters. As historian Keith Wailoo has written, approaches toward pain management disclose biases of race, gender and class: people of color are treated for pain less aggressively than whites, while women are more likely than men both to complain of pain and to have their assertions downplayed by physicians. Pharmacies in predominantly nonwhite neighborhoods are less likely to stock opioid painkillers, while doctors hesitate to prescribe pain medication for chronic diseases and remain on the lookout for patients arriving at their offices displaying “drug-seeking behavior.”

Whether the pain of women and people of color is undertreated because these groups experience it differently or because doctors are inclined to interpret their reports differently, the disparity underscores the extent to which both the occurrence of pain and its treatment occur in a social context. (In one rather unsubtle example, a researcher at the University of Texas found that Asians report lower pain levels due to their stoicism and desire to be seen as good patients.) Pain, as scholar David B. Morris has written, is never merely a biochemical process but emerges “only at the intersection of bodies, minds, and cultures.” Since pain is always subjective, all physicians can do is treat the patient’s perception of it. And if the mind plays such an essential role in how we perceive pain, then it can be enlisted in alleviating our suffering, whether by opioid, NSAID, acetaminophen, or placebo. If we think something can work, then we open up the possibility for its success.

Conversely, I suppose there’s a chance that the BMJ study could produce a reverse-placebo effect, in which this new evidence that acetaminophen does not relieve pain will render it ineffective if you choose to take it. If that happens, then you have my sympathy and I urge you to blame the scientists.

 

Sources:

David B. Morris, The Culture of Pain. Berkeley, CA: University of California Press, 1991.

Keith Wailoo, Pain: A Political History. Baltimore, MD: Johns Hopkins University Press, 2014.