Theranos and the Allure of Numbers-Based Medicine

The recent announcement that Theranos CEO Elizabeth Holmes has been banned from the blood-testing industry for two years is the latest chapter in the company’s rise and fall, a cautionary tale about what can happen when media hype and millions of dollars in investment funds collide with the revolutionary but untested claims of a driven, dynamic founder.

Until Theranos came under scrutiny from federal regulators, much of the laudatory press coverage focused on the company’s origin story—the turtleneck-clad Stanford dropout who idolized Steve Jobs and wanted to change the world through technology. Holmes landed on the covers of Fortune, Forbes, Inc. and T: The New York Times Style Magazine, and the New Yorker and Wired published lengthy profiles. At its peak, Theranos was valued at $9 billion, making Holmes the youngest self-made female billionaire in the world, at the helm of an enterprise whose board was packed with luminaries, including former Secretaries of State Henry Kissinger and George Shultz.

Holmes claimed her company had developed a process that would upend American medicine by allowing dozens of laboratory tests to be run off a few drops of blood at a fraction of the cost of traditional methods. But whether its technology actually works is still an open question, as Holmes has never allowed it to be examined by outside researchers, nor its data to be peer-reviewed. Last fall, the Wall Street Journal reported that Theranos’s proprietary Edison machines were inaccurate and that the company had been running tests on the same equipment used by established labs such as Quest Diagnostics and Laboratory Corporation of America. This set in motion a spate of bad news for the startup: investigations by the Centers for Medicare and Medicaid Services, the Securities and Exchange Commission and the U.S. Department of Justice; the cancellation of an agreement with Walgreens to open blood-testing centers in pharmacies nationwide; the voiding of two years of Theranos blood results; and class-action lawsuits from consumers who say their health was compromised by faulty data. (For an excellent summary of the company’s rise and fall, check out this graphic from NPR.)

The excitement over Theranos was based on its claim of proprietary technology that, if real, had the potential to revolutionize lab testing and the healthcare decisions that are based on it. But at the core of its vision was a less sensational though equally central premise: that direct-to-consumer blood testing is the future of American healthcare. As Holmes put it in a 2014 TEDMED talk, enabling consumers to test themselves for diseases before showing any symptoms would “redefine the paradigm of diagnosis.” By determining their risk for a condition before developing it, people could begin treatment at an earlier stage. Take, for example, Type 2 diabetes, which Holmes said drives 20 percent of our healthcare costs and can be reversed through lifestyle changes: 80 million Americans have a condition called prediabetes, and most of them don’t know it, because it generally produces no symptoms—no headache, no muscle pains, no nausea or fever or chills—and is detectable only through a blood test.

The removal of the subjective experiences of the patient from the act of diagnosis has been a part of medical practice since the mid-1800s, when the modern stethoscope made it possible to observe the internal workings of the body in a non-invasive way. By the beginning of the twentieth century, an assortment of new instruments gave doctors access to technical information that patients could neither see nor interpret. The laryngoscope and electrocardiograph offered data independent of an individual’s perceptions, while a new device to measure blood pressure found its place in the doctor’s medical bag. Hemocytometers and hemoglobinometers enabled microscopic examination of the size and number of blood cells, allowing hematologists, as these specialists became known, to read the blood and manipulate it to treat various disorders. Together, these instruments reduced the physician’s reliance on a patient’s subjective description of symptoms in favor of precise, quantifiable data.

As diagnostic technologies have grown more sophisticated, a number of symptomless conditions have appeared that didn’t previously exist. Many of these are defined by deviation from a numerical threshold: high blood pressure, for instance, or prediabetes. But as physician and historian Jeremy A. Greene has written, these numbers can change due to shifting medical opinion or adjustment by pharmaceutical companies, which have an incentive to make the population of patients who are candidates for their drugs as large as possible. When the American Diabetes Association lowered the threshold for prediabetes in 2003, the population of prediabetics instantaneously expanded. No one’s health changed that quickly, just our definition of which patients had a condition and who should take medication for it.
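To see how much work the threshold itself does, here is a minimal sketch in Python. The cutoffs reflect the 2003 revision, which lowered the fasting-glucose threshold for prediabetes from 110 mg/dL to 100 mg/dL (126 mg/dL remains the cutoff for diabetes); the sample readings are hypothetical, not real patient data.

```python
# A minimal illustration of how lowering a diagnostic threshold expands the
# diagnosed population. Cutoffs: fasting glucose of 110 mg/dL (pre-2003) vs.
# 100 mg/dL (post-2003) for prediabetes; 126 mg/dL marks diabetes itself.
# The readings below are hypothetical.

fasting_glucose_mg_dl = [92, 96, 98, 101, 104, 108, 112, 118]

def count_prediabetic(readings, lower_cutoff, diabetes_cutoff=126):
    """Count readings at or above the prediabetes cutoff but below the diabetes cutoff."""
    return sum(lower_cutoff <= r < diabetes_cutoff for r in readings)

print("Prediabetic under the pre-2003 definition (>=110 mg/dL):",
      count_prediabetic(fasting_glucose_mg_dl, 110))   # -> 2 of 8
print("Prediabetic under the post-2003 definition (>=100 mg/dL):",
      count_prediabetic(fasting_glucose_mg_dl, 100))   # -> 5 of 8
```

The arithmetic is trivial, which is precisely the point: the patients are unchanged, and only the dividing line has moved.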

The assumptions underlying a medicine-by-numbers approach are that disease is detectable with diagnostic instruments before the onset of experiential symptoms, and that more data is always better. Our blood does contain an immense amount of crucial information about our well-being, from levels of vitamin D and electrolytes to the presence of bacteria and antibodies. But as the history of blood testing shows, the idea of blood as an infallible roadmap to one’s health, a substance that with the proper analysis will inevitably reveal incipient disease, has not always held up. More data is not always more useful, especially if we lack the tools to understand it or if the medical meaning of the information is in flux. Three separate readings of the CA 15-3 biomarker for breast cancer may look nearly identical to a physician, writes Eleftherios P. Diamandis of the University of Toronto, but to a patient they could prompt reactions ranging from anxiety to jubilation, depending on where the numbers fall as predictors of cancer recurrence.

Defining diseases solely by numerical thresholds invites the possibility that these numbers could be manipulated, and with them the boundary between health and disease. Today’s normal cholesterol might be tomorrow’s borderline hyperlipidemia. Numbers-based medicine may hold enormous appeal in its apparent ability to translate the opacity of blood into quantifiable data, but treating every out-of-range figure as a marker of proto-disease is no guarantee that we’ll end up any healthier. We may just end up with more information.

 

Sources:

Eleftherios P. Diamandis, “Theranos Phenomenon: Promises and Fallacies.” Clinical Chemistry and Laboratory Medicine 53, 7 (June 2015).

Jeremy A. Greene, Prescribing By Numbers: Drugs and the Definition of Disease. Johns Hopkins University Press, 2006.

Keith Wailoo, Drawing Blood: Technology and Disease Identity in Twentieth-Century America. Johns Hopkins University Press, 1999.

Johnson & Johnson’s Baby Powder: Harmless Household Product or Lethal Carcinogen?

On Monday night, a jury in St. Louis awarded $72 million to the family of a woman who died of ovarian cancer after using Johnson & Johnson’s baby powder and other talcum-based products for years. The verdict came after a three-week trial in which lawyers for the plaintiff, an Alabama woman named Jacqueline Fox, argued that Johnson & Johnson had known of the dangers of talcum powder since the 1980s and concealed the risks. The corporation’s lawyers countered by saying the safety of talcum powder was supported by decades of scientific evidence and there was no direct proof that its products had caused Fox’s cancer.

Fox used Johnson & Johnson’s baby powder and another talc-based product called Shower to Shower for 35 years. “It just became second nature, like brushing your teeth,” her son said. “It’s a household name.” The company has come under fire in recent years from consumer safety groups for the use of questionable ingredients in its products, including formaldehyde and 1,4-dioxane, both of which are considered likely carcinogens. Fox’s case was the first to reach a monetary award among some 1,200 lawsuits pending nationally against the company.

The case bears a notable resemblance to the lawsuits against the tobacco companies, with attorneys for the plaintiff and the defense each borrowing from their tobacco-era counterparts’ playbooks. Fox’s lawyers claimed that Johnson & Johnson’s own medical consultants warned in internal documents of the risk of ovarian cancer from hygienic talc use, just as tobacco companies knew for decades that smoking caused lung cancer but sought to suppress the evidence. And the pharmaceutical giant responded as the tobacco industry did in the numerous lawsuits it faced in the 1980s and 1990s: by creating doubt about the mechanism of cancer causation and upholding the safety of its products.

I find this case uniquely disturbing because the image of Johnson & Johnson’s baby powder as a household product that evokes a sense of comfort and protection is so at odds with the jury’s finding that it caused or contributed to a case of fatal ovarian cancer. The company appears to be right in claiming that the scientific evidence is inconclusive: some studies have shown a slightly increased risk of ovarian cancer among women who use products containing talcum powder, while others have found no link. It’s important to note that until the 1970s talcum-based products contained asbestos, so people who used them before that time were exposed to a known carcinogen. Still, the research is unsettled enough that the American Cancer Society advises people who are concerned about talcum powder to avoid using it “[u]ntil more information is available.”

Without examining the trial transcripts or interviewing the jurors, it’s impossible to know for sure what factors influenced the verdict. I imagine the tobacco settlements have irrevocably changed the environment surrounding these types of lawsuits—that there’s a sizable segment of the American public which is understandably suspicious of large corporations trying to conceal research about the health risks of their products. I suspect there’s also an element of causation and blame at work here, about wanting to assign responsibility for a disease that remains, for the most part, perplexing and impenetrable. We all make choices that affect our health on a daily basis, from what kind of shampoo to use to what to eat for lunch, and we want assurance that the repercussions of the decisions we make with good intentions will be in our best interest. But as the unprecedented $72 million verdict shows, we have an immense uneasiness about the dangers lurking behind the most benign-seeming household products. And we fear that those products, rather than benefiting us, will instead do us harm.

Shooting the Moon on Cancer

During his final State of the Union address on January 12th, President Obama announced a “moonshot” to cure cancer and appointed Vice President Biden to lead it. The effort is reminiscent of Nixon’s 1971 War on Cancer, which was envisioned as an all-out effort to eradicate the disease by marshaling the kinds of resources and scientific knowledge that two years earlier had sent a man to the moon. By most measures we’re much better off now than we were four decades ago: cancer treatments have improved drastically, people are living longer after diagnosis, and mortality rates have been falling since their peak in the early 1990s. But as anyone who has been touched by cancer can attest—and in the United States, that’s nearly all of us—the war is far from over.

Biden, whose son died of brain cancer last year, outlined a plan that’s essentially twofold: to increase public and private resources in the fight against cancer, and to promote cooperation among the various individuals, organizations and institutions working in cancer research. The initiative will likely lead to increased funding for the National Institutes of Health, a prospect that has many scientists giddy with anticipation. But the complexities of cancer, which are now much clearer than in 1971, underscore the multiple challenges confronting us. On NPR, one researcher described how the same form of cancer can act differently in different people because of the immense number of genetic distinctions between us. And in the New York Times, Gina Kolata and Gardiner Harris pointed out that the moonshot reflects an outmoded view of cancer as one disease rather than hundreds, and the idea of discovering a single cure is therefore “misleading and outdated.”

Nixon’s initiative signaled a faith in the power of scientific progress to combat a disease that many regarded with dread. In polls, articles, and letters, Americans at the time debated whether they’d want to be told of a cancer diagnosis and worried about being in close contact with cancer patients. The disease’s many unknowns generated fears of contracting it, desperation about the pain and debilitation associated with it, and a ready market for unorthodox cures (this was, after all, the era of laetrile and Krebiozen).

Much has changed in the intervening decades, and if you’re diagnosed with a form of cancer today, you’ll almost certainly have a better prognosis than you would have had in 1971. But one aspect of cancer in American culture has not changed, and that’s the mystique surrounding the disease. Cancer is not the biggest cause of death in the US—heart disease takes top honors—but it remains the most feared. It occupies an outsize place in the landscape of health and wellness, suffering and death. As such, it demands a bold approach. Winning the “war on cancer” would necessitate breaking down the disease into types and subtypes: not cancer, but cancers; not cancer, but ductal carcinoma in situ and acute myeloid leukemia and retinoblastoma. But this would dilute its power as a singular cultural force, an adversary that (the thinking goes) could be vanquished once and for all with a massive, coordinated input of resources.

Photo credit: Cecil Fox, National Cancer Institute


Biden’s moonshot doesn’t just reproduce an outmoded idea of cancer; it depends on it. It also promotes research and therapies in search of a cure at the expense of prevention, which Biden fails to mention a single time. Innovative new treatments that showcase scientific advancement are flashy and exciting, unlike lifestyle recommendations around nutrition or exercise, or widespread public health efforts to reduce the presence of environmental carcinogens. There’s also the issue of how to measure progress toward a goal as elusive as curing cancer. Would it show up in decreased incidence or mortality rates? In lowering the number of new diagnoses to zero? Perhaps the moonshot should instead focus on reducing the human suffering associated with cancers by emphasizing prevention and addressing the inequalities that affect health and health outcomes. It’s an objective that’s unquestionably less spectacular than curing cancer, but certainly more achievable.

 

Sources:

James T. Patterson, The Dread Disease: Cancer and Modern American Culture. Harvard University Press, 1987.

 

History on Screen: The Knick, William Halsted, and Breast Cancer Surgery

Recently I watched the first episode of The Knick, a new series on Cinemax that revolves around the goings-on at a fictitious hospital in turn-of-the-century New York. It stars Clive Owen as Dr. John Thackery, a brilliant and arrogant surgeon who treats his coworkers contemptuously but earns their grudging respect because he’s so darn good at his job. I’ve read that the show draws on the collections and expertise of Stanley Burns, who runs the Burns Archive of historical photographs. As a medical historian, I suppose it’s an occupational inevitability that I would view The Knick with an eye toward accuracy. Mercifully, I found the show’s depiction of the state of medicine and public health at the time to be largely appropriate: the overcrowded tenements, the immigrant mother with incurable tuberculosis, the post-surgical infections that physicians were powerless to treat in an age before antibiotics. I was a bit surprised by one scene in which Thackery and his colleagues operate in an open surgical theater, their sleeves rolled up and street clothes covered by sterile aprons as they dig their ungloved hands into a patient; while not strictly anachronistic, these practices were certainly on their way out in 1900. But overall, I was gratified to see that the show’s producers seem to be taking the medical history side of things seriously, even if they inject a hefty dose—or overdose—of drama.

A temperamental genius, Thackery thrives on difficult situations that call for quick thinking and improvisation. He pioneers innovative techniques, often in the midst of demanding surgeries, and invents a new type of clamp when he can’t find one to suit his needs. He is also a drug addict who patronizes opium dens and injects himself with liquid cocaine on his way to work. The character appears to be based on William Stewart Halsted, an American surgeon known for all of these qualities, right down to the drug addiction. Born in 1852 to a Puritan family from Long Island, he attended Andover and Yale, where he was an indifferent student, and the College of Physicians and Surgeons, where he excelled. After additional training in Europe, he returned to the US to begin his surgical career, first in New York City, then at Johns Hopkins Medical School. In addition to performing one of the first blood transfusions and being among the first to insist on an aseptic surgical environment, he was famously a cocaine addict, having earlier begun experimenting with the drug as an anesthetic. His colleagues covered for his erratic behavior, looking the other way when he arrived late for operations or missed work for days or weeks at a time. Twice he was shipped off to the nineteenth-century version of rehab, where doctors countered his cocaine addiction by dosing him with morphine. Although Halsted remained a cocaine addict all his life, he managed it well enough that by the time he died in 1922 he was considered one of the country’s preeminent surgeons and the founder of modern surgery.

Halsted pioneered another modern innovation, as well: the overtreatment of breast cancer. In the late nineteenth century, women often waited until the disease had reached an advanced stage before seeking medical treatment. As historian Robert A. Aronowitz writes, clinicians “generally estimated the size of women’s breast tumors on their initial visit as being the size of one or another bird egg.” When cancer was this far along, the prognosis was poor: more than 60 percent of patients experienced a local recurrence after surgery, according to figures compiled by Halsted.

In the 1880s, Halsted began working on a way to address these recurrences. Like his contemporaries, he assumed that cancer started as a local disease and spread outward in a logical, orderly fashion, invading the closest lymph nodes first before dispersing to outlying tissues. Recurrences, in this view, were the result of a surgeon acting too conservatively by not removing enough tissue and leaving cancerous cells behind. The procedure he developed, which would become known as the Halsted radical mastectomy, removed the entire breast, underarm lymph nodes, and both chest muscles en bloc, or in one piece, without cutting into the tumor at all. Halsted claimed astonishing success with his operation, reporting in 1895 a local recurrence rate of six percent. Several years later, he compiled additional data that, while less impressive than his earlier results, still outshone what other surgeons were accomplishing with less extensive operations: 52 percent of his patients lived three years without a local or regional recurrence.

By 1915, the Halsted radical mastectomy had become the standard operation for breast cancer in all stages, early to late. Physicians in subsequent decades would push Halsted’s procedure even further, going to ever more extreme lengths in pursuit of cancerous cells. At Memorial Sloan-Kettering Hospital in New York, George T. Pack and his student, Jerome Urban, spent the 1950s promoting the superradical mastectomy, a five-hour procedure in which the surgeon removed the breast, underarm lymph nodes, chest muscles, several ribs, and part of the sternum before pulling the remaining breast over the hole in the chest wall and suturing the entire thing closed. Other surgeons performed bilateral oophorectomies on women with breast cancer, removing both ovaries in an attempt to cut off the estrogen that fed some tumors. While neither of these procedures became a widely utilized treatment for the disease, they illustrate the increasingly militarized mindset of cancer doctors who saw their mission in heroic terms and considered a woman’s state of mind following the loss of a breast, and perhaps several other body parts, to be, at best, a negligible concern.

The Halsted radical mastectomy was on its way out by the late 1970s; within a few years, it would account for less than five percent of breast cancer surgeries. The demise of Halsted’s eponymous operation had several causes. First, long-term survival data showed that the procedure was no more effective at reducing mortality than simple mastectomy, or mastectomy combined with radiation. Second, the radical mastectomy was highly disfiguring, leaving women with a deformed chest where the breast had been, hollow areas beneath the clavicle and underarm, and lymphedema, or swelling of the arm following the removal of lymph nodes. As the women’s health movement expanded in the 1970s, patients grew more vocal in demanding less disabling treatments, such as lumpectomies and simple mastectomies.

William Stewart Halsted


Halsted’s life and the state of surgery, medicine and public health at the turn of the twentieth century are a rich source of material for a television series, with the built-in drama of epidemic diseases, inadequate treatments, and high mortality rates. But Halsted’s legacy is complicated. He pushed his field forward and introduced innovations, such as surgical gloves, that led to better and safer conditions for patients. But he also became the standard-bearer for an aggressive approach to breast cancer that in many cases resulted in overtreatment. The Halsted radical mastectomy undoubtedly prevented thousands of women from dying of breast cancer, but for others with small tumors or less advanced disease it was surely excessive. And hidden behind the statistics of the number of lives saved were actual women who had to live with the physical and emotional scars of a deforming surgery. The figure of the heroic doctor may still be with us, but the mutilated bodies left behind have been forgotten.


Sources:

Robert A. Aronowitz, Unnatural History: Breast Cancer and American Society. Cambridge University Press, 2007.

Barron H. Lerner, The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America. Oxford University Press, 2001.

Howard Markel, An Anatomy of Addiction: Sigmund Freud, William Halsted, and the Miracle Drug Cocaine. Pantheon, 2011.

James S. Olson, Bathsheba’s Breast: Women, Cancer & History. Johns Hopkins University Press, 2002.