The Technologic Fallacy


My family and I recently saw the new Oppenheimer movie. The scenes that struck me the most were of the crowds cheering after the Trinity test. How on earth could people have been ecstatic about the creation of the most destructive human-made force in history? But of course they were. I get that they were excited about helping to end the war. And being scientists, they must have seen it as a successful experiment. But I think there was more to it. They had a sense of optimism, a sense that this was the door into a new technologic era of human progress. How else can you explain all the atomic iconography of the 1950s? Atomic drive-ins, the Seattle Space Needle, the UFO-shaped LAX terminal, the Jetsons. The Atomic Age was the Space Age and vice versa, and it was all progress.

Humans have always been predisposed to technologic optimism. 

Medicine is no different. Particularly American medicine. Why do Americans turn up their noses at the much more cost-effective, primary-care-centric health systems of other high-income countries? Partly out of fear of socialism, sure, but also because they believe that Americans have more access to new technologies, and that technology is what cures disease.

We Americans LOVE medical technology. Genomics. Personalized medicine. Proton beam and gamma knife therapy (atomic!). Even telehealth, which is basically just talking with your doctor or therapist, is framed as a technology. The latest object of adoration is artificial intelligence in the form of large language models (LLMs), which have been used for everything from taking board exams to teaching patients about their glaucoma drops to training neurology residents. And what’s even better than an AI assistant for a busy primary care doctor? An AI assistant on their mobile phone.

Which brings me to the podcast I was listening to the other day. Russ Roberts of EconTalk interviewed MIT’s Daron Acemoglu, one of the biggest names in academic economics right now. At issue was the role of technology in improving people’s standard of living. Roberts argued that technology is beneficial to humanity, full stop. Acemoglu countered that technology can be a powerful tool for improving standard of living *provided that* it’s actually used to improve people’s standard of living. And that latter point requires institutions to both protect humans from technology’s harms and distribute the benefits fairly. Much of the podcast discussion centered on the Industrial Revolution in Britain, which brought enormous efficiencies in manufacturing but also incredibly unhealthy conditions for urban workers (many of whom were forced into cities and factories when the machines took away their rural livelihoods). Life expectancy fell below 30 years. And the economic benefits? Largely concentrated in the hands of the 1%, or maybe closer to the 1% of the 1% (sound familiar?). Acemoglu went on to discuss the role of England’s progressive social movements, including the Chartists and the rise of early labor unions, in starting to rebalance these social cost-benefit ratios. (Of course, it took another century for the United Kingdom’s modern welfare state, including the National Health Service, to appear on the scene. But that’s a topic for another newsletter.)

Bringing this back to the setting of healthcare: Healthcare technology per se, despite its enormous popular appeal, isn’t a justifiable national goal. (Please, stop talking about moonshots!) Healthcare’s Just Cause is national health and well-being, not sexy tech. 

I’d like to suggest a conceptual equation for health tech: B = A × E × S. Benefit = Affordability × Efficacy × Safety. (A rough sketch of why the multiplication matters follows the list below.)

  • Benefit: Longer life and/or higher quality of life.
  • Affordability: This part of the equation is partly about access (using the lowest-income members of society as the standard). And it’s partly about opportunity costs, since every dollar spent on a new technology means a dollar not available for other healthcare services (including services from low-tech, high-value providers like social workers and clinical pharmacists).
  • Efficacy: Most technologies don’t live up to their hype (remember Segways? Or supersonic passenger jets? How about oral hormone replacement therapy for postmenopausal women?). Which is all the more reason to hold off on spending a fortune until there’s a good decade or so of data to prove the overall benefits. And why I’m disappointed by CMS’s decision to cover 80% of the cost of the latest Alzheimer’s therapy.
  • Safety: The risks and side effects of technologies sometimes take years to discover and quantify. Despite the FDA’s well-earned reputation for caution, dangerous technologies can slip through even the regulatory safety screening net. Such as DePuy’s metal-on-metal hip joints, or Merck’s Vioxx analgesic. Which is why we need transparent registries and safety monitoring systems with real accountability, and why we can’t afford to leave these steps in the hands of the companies who sell the products. The Wall Street Journal recently reported on a fascinating study in which the speed of initiating medical device recalls following safety concerns was found to be inversely related to the size of the CEO’s ownership stake in the company. In the case of new-category technologies such as AI, the safety issues are much more complex, and merit a larger and longer discussion. What we definitely don’t need is for the safety rules on new-category tech to be driven mainly by industry. Which is why I worry about the National Academy of Medicine’s recently announced committee to design an AI code of conduct, most of whose members represent either a tech company or a large health system that wants to be a tech company.
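
Because the three factors multiply rather than add, a technology that fails badly on any one of them delivers close to zero net benefit, no matter how impressive the other two are. Here is a minimal sketch of that logic (the function name and the example scores are my own hypothetical illustration, not a validated scoring method):

```python
# A rough, hypothetical illustration of the B = A x E x S framing.
# The example scores below are made up purely to show that a
# multiplicative model punishes any single weak dimension.

def benefit(affordability: float, efficacy: float, safety: float) -> float:
    """Score each factor from 0 (fails outright) to 1 (ideal); benefit is their product."""
    return affordability * efficacy * safety

# A dazzling but unaffordable technology scores poorly overall...
print(benefit(affordability=0.05, efficacy=0.9, safety=0.9))  # 0.0405
# ...while a modest, cheap, safe one can deliver far more net benefit.
print(benefit(affordability=0.9, efficacy=0.5, safety=0.9))   # 0.405
```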

Bottom line: healthcare technology has lots of potential to improve human well-being. But to actually achieve net benefit, it has to be appropriately harnessed and regulated. Otherwise we end up canceling out the benefits through the introduction of new risks and the diversion of resources, while enriching the few at the expense of the many.

Here are a few relevant resources if you’d like to explore this topic further:

Technology Fallacy book 

EconTalk episode

Sam Altman Orb article 

How AI can increase burnout in medicine 
