Theranos and the Allure of Numbers-Based Medicine

The recent announcement that Theranos CEO Elizabeth Holmes has been banned from owning or operating a medical laboratory for two years is the latest chapter in the company’s rise and fall, a cautionary tale about what can happen when media hype and millions of dollars in investment collide with the revolutionary but untested claims of a driven, dynamic founder.

Until Theranos came under scrutiny from federal regulators, much of the laudatory press coverage focused on the company’s origin story—the turtleneck-clad Stanford dropout who idolized Steve Jobs and wanted to change the world through technology. Holmes landed on the covers of Fortune, Forbes, Inc. and T: The New York Times Style Magazine, and the New Yorker and Wired published lengthy profiles. At its peak, Theranos was valued at $9 billion, making Holmes the youngest self-made female billionaire in the world, at the helm of an enterprise whose board was packed with luminaries, including former Secretaries of State Henry Kissinger and George Shultz.

Holmes claimed her company had developed a process that would upend American medicine by allowing dozens of laboratory tests to be run off a few drops of blood at a fraction of the cost of traditional methods. But whether its technology actually works is still an open question, as Holmes has never allowed it to be examined by outside researchers, nor its data to be peer-reviewed. Last fall, the Wall Street Journal reported that Theranos’s proprietary Edison machines were inaccurate and that the company had been running many of its tests on the same equipment used by established labs such as Quest Diagnostics and Laboratory Corporation of America. This set in motion a spate of bad news for the startup: investigations by the Centers for Medicare and Medicaid Services, the Securities and Exchange Commission and the U.S. Department of Justice; the cancellation of an agreement with Walgreens to open blood-testing centers in pharmacies nationwide; the voiding of two years of Theranos blood results; and class-action lawsuits from consumers who say their health was compromised by faulty data. (For an excellent summary of the company’s rise and fall, check out this graphic from NPR.)

The excitement over Theranos was based on its claim of proprietary technology that, if real, had the potential to revolutionize lab testing and the healthcare decisions that are based on it. But at the core of its vision was a less sensational though equally central premise: that direct-to-consumer blood testing is the future of American healthcare. As Holmes put it in a 2014 TEDMED talk, enabling consumers to test themselves for diseases before showing any symptoms would “redefine the paradigm of diagnosis.” By determining their risk for a condition before developing it, people could begin treatment at an earlier stage. Take, for example, Type 2 diabetes, which Holmes says drives 20 percent of our healthcare costs and can be reversed through lifestyle changes: 80 million Americans have a condition called prediabetes, and most of them don’t know it because the condition generally produces no symptoms—no headache, no muscle pains, no nausea or fever or chills—and is detectable only through a blood test.

The removal of the subjective experiences of the patient from the act of diagnosis has been a part of medical practice since the mid-1800s, when the modern stethoscope made it possible to observe the internal workings of the body in a non-invasive way. By the beginning of the twentieth century, an assortment of new instruments gave doctors access to technical information that patients could neither see nor interpret. The laryngoscope and electrocardiograph offered data independent of an individual’s perceptions, while a new device to measure blood pressure found its place in the doctor’s medical bag. Hemocytometers and hemoglobinometers enabled microscopic examination of the size and number of blood cells, allowing hematologists, as these specialists became known, to read the blood and manipulate it to treat various disorders. Together, these instruments reduced the physician’s reliance on a patient’s subjective description of symptoms in favor of precise, quantifiable data.

As diagnostic technologies have grown more sophisticated, a number of symptomless conditions have appeared that didn’t previously exist. Many of these are defined by deviation from a numerical threshold: high blood pressure, for instance, or prediabetes. But as physician and historian Jeremy A. Greene has written, these numbers can change due to shifting medical opinion or adjustment by pharmaceutical companies, which have an incentive to make the population of patients who are candidates for their drugs as large as possible. When the American Diabetes Association lowered the threshold for prediabetes in 2003, the population of prediabetics instantaneously expanded. No one’s health changed that quickly, just our definition of which patients had a condition and who should take medication for it.
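To see how much work the threshold itself does, here is a minimal sketch in Python. The population of fasting glucose readings is entirely hypothetical, and the 110 and 100 mg/dL cutoffs simply stand in for the lower bound of the “prediabetic” range before and after a revision like the ADA’s 2003 change; every other number is invented for illustration.

import random

# Entirely hypothetical population of fasting glucose readings, in mg/dL;
# the distribution is invented purely for illustration.
random.seed(0)
population = [random.gauss(95, 12) for _ in range(100_000)]

def count_prediabetic(readings, lower_cutoff, upper_cutoff=126):
    # Count readings falling in the "prediabetic" band [lower_cutoff, upper_cutoff).
    return sum(lower_cutoff <= r < upper_cutoff for r in readings)

before = count_prediabetic(population, lower_cutoff=110)  # older, higher cutoff
after = count_prediabetic(population, lower_cutoff=100)   # lowered cutoff

print(f"Labeled prediabetic at a 110 mg/dL cutoff: {before:,}")
print(f"Labeled prediabetic at a 100 mg/dL cutoff: {after:,}")
print(f"Reclassified by the definition change alone: {after - before:,}")

With these invented numbers, the second count comes out roughly three times the first, yet nothing about anyone’s blood changes between the two tallies; only the label does.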

The assumptions underlying a medicine-by-numbers approach are that disease is detectable with diagnostic instruments before the onset of experiential symptoms, and that more data is always better. Our blood does contain an immense amount of crucial information about our well-being, from levels of vitamin D and electrolytes to the presence of bacteria and antibodies. But as the history of blood testing shows, the idea of blood as an infallible roadmap to one’s health, a substance that with the proper analysis will inevitably reveal incipient disease, has not always held up. More data is not always more useful, especially if we lack the tools to understand it or if the medical meaning of the information is in flux. Three separate readings of the CA 15-3 biomarker for breast cancer may look nearly identical to a physician, writes Eleftherios P. Diamandis of the University of Toronto, but in a patient they could prompt reactions ranging from anxiety to jubilation, depending on where the numbers fall as predictors of cancer recurrence.

Defining diseases solely by numerical thresholds invites the possibility that these numbers could be manipulated, and with them the boundary between health and disease. Today’s normal cholesterol might be tomorrow’s borderline hyperlipidemia. Numbers-based medicine may hold enormous appeal in its apparent ability to translate the opacity of blood into quantifiable data, but treating every out-of-range figure as a marker of proto-disease is no guarantee that we’ll end up any healthier. We may just end up with more information.


Sources:

Eleftherios P. Diamandis, “Theranos Phenomenon: Promises and Fallacies,” Clinical Chemistry and Laboratory Medicine 53, no. 7 (June 2015).

Jeremy A. Greene, Prescribing by Numbers: Drugs and the Definition of Disease. Johns Hopkins University Press, 2006.

Keith Wailoo, Drawing Blood: Technology and Disease Identity in Twentieth-Century America. Johns Hopkins University Press, 1999.

Zika and Risk

As the Zika virus spreads north from South America, Central America and the Caribbean, the list of public health recommendations and scientific unknowns continues to grow. Zika is not new; it was first identified in 1947 in Uganda, and although scientists have found consistent evidence of antibodies in primates since then, few documented cases were reported in humans until recently. Current statistics are grim: the virus has now been confirmed in over thirty countries in the region, with hundreds, perhaps thousands, of additional cases likely in the coming months as mosquito season peaks in the Northern Hemisphere.

Although Zika has been linked to a number of health issues, including fever, joint pain, and Guillain-Barré syndrome, most adults who are infected will have mild symptoms, if any, and no lasting effects. The risks for pregnant women, however, are more severe. The virus has been found to cause microcephaly, a condition in which babies are born with abnormally small heads, leading to brain damage and developmental issues. Mounting fears of a virus for which no vaccine or cure exists are prompting increasingly dire warnings from public health agencies, including the World Health Organization, which recommends that pregnant women avoid traveling to areas of ongoing Zika transmission. Officials in a number of affected countries have advised women to postpone pregnancy for a period of months or years; in El Salvador, health ministers have told women not to get pregnant until 2018.

While we know that Zika causes microcephaly, a deeper understanding of the ways in which the virus works is severely lacking. Take the following catalog of unknowns from the website of the Centers for Disease Control:

"If a pregnant woman is exposed

  • We don't know how likely she is to get Zika.

If a pregnant woman is infected

  • We don't know how the virus will affect her or her pregnancy.
  • We don't know how likely it is that Zika will pass to her fetus.
  • We don't know, if the fetus is infected, whether the fetus will develop birth defects.
  • We don't know when in pregnancy the infection might cause harm to the fetus.
  • We don't know whether her baby will have birth defects."

While I’m by no means trying to minimize the implications of having a baby that tests positive for Zika or a child with microcephaly, I find that the uncompromising public health recommendations around the virus’s transmission reflect less the absolute risk to a pregnant woman (which we lack the information to conclusively determine) than the inadequacy of what medicine can offer in the event of infection. The anxiety surrounding the virus is understandable: pregnancy is already a 40-week state of perpetual uncertainty that entails a constant balancing of input against outcome. As with alcohol and caffeine, which pregnant American women are advised to avoid or strictly limit, there is no known safe level of exposure to Zika; one must assume that a developing fetus is at risk, even if the mechanism of infection is not fully understood.

I realize that the calculation of risk will be different for women who travel to areas of active Zika transmission and those who reside there. I’m also aware that birth control and abortion are not available to most women in a number of affected countries, including Brazil, and sexual violence and coercion mean that many women are not fully in control of their sexuality. Zika may not be a new disease, but it is a newly emerging threat, and millions of women who are pregnant, thinking of becoming pregnant, or simply of childbearing age will have to weigh questions of risk and responsibility as they make essential decisions about travel and reproduction.

Football and the Risk of Concussion

I’ve been thinking a lot about risk lately. In medicine and public health, it’s an idea that’s always present, usually invoked in the service of disease prevention. Over the years, the ways in which risk has been framed have changed as the major causes of mortality have shifted from infectious to chronic disease. In the eighteenth and nineteenth centuries, an epidemic of cholera or yellow fever might have been seen as a way to separate acceptable citizens from unacceptable ones, a distinction premised on some combination of ethnicity, race, religion, class and moral principles. More recently, public health recommendations have focused on lifestyle practices that can reduce our risk of developing cancer, heart disease, and other chronic illnesses with multifactorial causes.


I’m interested in how we experience risk and how this shapes the decisions we make about what to eat, where to live, the types of behaviors we engage in and the situations we’re comfortable with. How does each of us choose to respond to a series of unknowns about, for instance, the dangers of genetically modified food, the possible link between cellphone radiation and cancer, or the relationship between pesticides and hormonal imbalances? If Alzheimer’s disease runs in your family, what do you do to decrease the chances you’ll develop it? If you’re diagnosed with a precancerous condition that may or may not become invasive, do you remove the suspicious cells immediately or wait to see if they spread? What does it mean for our bodies to be constantly at risk, under threat from sources both known and unknown that we cannot see or regulate?

In the upcoming months, I’ll be exploring these ideas and more in a series of essays on risk. My premise is that the ways in which we choose to deal with risk are fundamentally about control, and are aimed at preserving the illusion that we have command over disease outcomes in a world ruled by randomness and unpredictability. Cancer screenings, lifestyle habits, and the other behaviors we adopt to stay healthy are an attempt to reduce our risk, to make the uncertain certain, to bring what’s unknown into the realm of the foreseeable. As a way of managing the future, this approach assumes a linearity of outcomes: if I engage in x behavior, then I will prevent y disease. It assumes that illness can be reduced to a series of inputs and corresponding outputs, and that wellness is more than a game of chance or a spin of a roulette wheel. The boundaries of what we consider reasonable measures to embrace for the sake of our health will differ for each of us, based on our individual tolerance for ambiguity and what we consider an acceptable level of risk. As I delve into an investigation of the relationship between risk and health, the underlying question I’ll be concerned with is this: what level of uncertainty can each of us live with, and how does it affect our behavior?

So here goes, my first essay in an ongoing series on risk.

With the Denver Broncos’ 24-10 victory over the Carolina Panthers in Super Bowl 50, the 2015 football season came to its much-hyped conclusion. I didn’t watch the game, but I have been following closely any public health news involving the National Football League. Just days before the Super Bowl, the family of Ken Stabler, a former NFL quarterback, announced that he had suffered from chronic traumatic encephalopathy (CTE), a degenerative brain disease that can trigger memory loss, erratic behavior, aggression, depression, and poor impulse control. The most prominent quarterback yet to be diagnosed, Stabler joins Junior Seau, Frank Gifford, Mike Webster, and more than 100 other former players found to have the disease, which is caused by repeated brain trauma and can be diagnosed only after death, through physical examination of the brain. Retired NFL players suffer from numerous chronic injuries that affect their physical and mental well-being: in addition to multiple concussions, there are torn ligaments, dislocated joints, and repeated broken bones that can no longer be effectively managed by cortisone injections and off-the-field treatments. Many athletes end up addicted to painkillers; some, like Seau, commit suicide or die from drug overdoses, isolated from family and friends. One particularly moving article in the New York Times profiled Willie Wood, a 79-year-old former safety for the Green Bay Packers who was part of the most memorable play of Super Bowl I yet can no longer recall that he was in the NFL. And incidents of domestic abuse against the partners and spouses of players continue to make headlines, including the unforgettable video of Ray Rice knocking his then-fiancée unconscious in an elevator at an Atlantic City casino.

Despite these controversies, football remains enormously popular in the United States. Revenue for the NFL was $11 billion in 2014, and league commissioner Roger Goodell pocketed $34 million in compensation that year. The NFL has managed to spin the concussion issue in a way that paints the league as highly concerned about player safety. Goodell touts the 39 safety-related rules he has implemented during his tenure, and the settlement last fall in a class-action lawsuit brought by former players set up a compensation fund to cover certain medical expenses for retired athletes (although some criticized the deal because it doesn’t address symptoms of CTE in those who are still alive). Increasing awareness of the danger of concussions has prompted discussions about how to make the game safer for young athletes. One approach that’s been floated is to have players scrimmage and run drills without helmets and protective padding, forcing them to treat each other gently in practice while saving the vigorous tackles for game day. The Ivy League just agreed to eliminate full-contact hitting from all practices during the regular season, a policy that Dartmouth College adopted in 2010. And earlier this week, the NFL’s top health and safety official finally acknowledged the link between football and CTE after years of equivocating on the subject.

But controlled violence is such a central aspect of football that I wonder how much the sport and its culture can be altered without changing its underlying appeal. Would football be a profoundly different game if it adopted protocols that reduce the likelihood of concussions and other injuries? How much room is there for change within the game that football has become? Players continue to get bigger and stronger, putting up impressive stats at younger and younger ages. My friend’s nephew, a standout high school player in Texas and a Division I college prospect, was 6’2” and weighed 220 pounds when he reported as a freshman for pre-season training—numbers that I imagine will only grow as he continues to train, and that players around him will have to match in order to remain competitive.

With mounting knowledge of the link between football and degenerative brain disease, I’m interested in the level of risk that’s acceptable in a sport where serious acute and chronic injuries are increasingly the norm. In a recent CNN town hall, Florida senator Marco Rubio asserted that football teaches kids important life lessons about teamwork and fair play, and pointed out that there are risks inherent in plenty of activities we engage in, such as driving a car. True enough, but driving a car is an essential part of daily life for many of us, which means that we have little choice but to assume the associated risks. Football is voluntary. I realize that for some, football is less than completely voluntary, from children who face parental pressure to professional athletes who feel compelled to remain in the game because they’re supporting families or facing limited options outside the sport. Still, playing football is a chosen risk to a greater degree than driving or riding in a car, the dominant form of transportation in our suburbanized communities. And if risk reduction is about attempting to control for uncertainty, then the accumulating evidence about CTE and other severe injuries is sure to change the calculus by which parents and players assess participation in a sport where lifelong mental and physical disabilities are not just possible, but probable.