My Techno-Safety Optimism Manifesto

“We need to discuss what kind of society we want. Technology can accelerate us down a good path or a bad path. The question isn’t always how to engineer better software but how to ensure humans behave more ethically.”

--Dr. Yuri Quintana of Beth Israel Deaconess Medical Center, quoted in the press release for a conference on AI in medicine that he recently co-hosted.

This week’s newsletter is inspired by an online rant from one of Silicon Valley’s loudest voices. (I was about to write “most influential voices,” because so many people pay attention to what he says, but I just don’t want to give him the compliment.) I stumbled across it thanks to the thoughtful crew of the Slate Money podcast, two of whom rated the rant an 8 or a 9 on a scale of 0 = normal to 10 = Unabomber-level unhinged.

The rant in question is this “Techno-Optimist Manifesto” by Marc Andreessen, who is formerly famous for being the lead engineer on the original Netscape web browser, the one that kicked off the internet age as well as the first dot-com bubble. (History note: Andreessen was recruited by Jim Clark, who had previously founded Silicon Graphics and, before that, completed his computer science PhD at my own alma mater, the University of Utah, as documented in Michael Lewis’s truly wild ride of a book, The New New Thing.) Andreessen is currently a billionaire and a partner in Andreessen Horowitz, a major VC funder of both AI and crypto startups.

So what did Andreessen say, and what was so unhinged? He starts by claiming that all good things in the world are attributable to technology and markets. He then goes on to claim that both tech and markets have infinite and exponential potential, so that, for example, our near-term descendants ought to be using a million-fold as much energy as we use today. Natural limits? Doesn’t believe in them. Global warming? Not a problem, because technology (nuclear, mainly) will solve it. Limits to earth’s carrying capacity? Who cares, since “Our descendants will live in the stars.”

It’s his next claim that really got my blood pressure up, though: namely, that since tech solves all the world’s problems, the best thing we can do is stay out of its way. Don’t regulate it, don’t try to make it safer, because that will just slow down the pace of tech. In his manifesto section subtitled “The Enemy” (subtle!) he calls out “social responsibility,” “trust and safety,” “tech ethics,” and “risk management” (among several other ideas) as evil programs destined to make humanity worse off. It’s a simple argument, and a seductive one, but also a demonstrably stupid one.

To explain why, first consider two widely accepted technologies, household electricity and automobiles. Both are powerful, and both have dramatically and fundamentally changed the shape of the modern world. (Problematically so in the case of the automobile, but that’s another story for another time.) Both were also quite dangerous 100 years ago (electricity probably more due to house fires than electrocution), yet both are considered very safe today.

Now compare them to another technology Andreessen cites favorably in his screed: nuclear fission. (Specifically, Andreessen claims that global warming is only a problem because we don’t have enough nuclear plants, which ignores the globally limited supply of uranium, let alone other physical limitations to nuclear energy. But I guess he’s just not one to worry about details.) And here’s my point: a major reason why countries like Germany have scaled back their use of nuclear power is safety. After Three Mile Island, then Chernobyl, then Fukushima, the public is understandably skittish about nuclear. But now imagine a world in which nuclear energy had been developed more slowly, with a safety-first approach. (Probably also a world in which the first major large-scale use of nuclear power hadn’t been in the unfortunate context of a war.) It’s quite possible that under the more cautious scenario, other safer, more reliable power plant designs would have been developed by now. Instead, we’re in a catch-22: the public isn’t willing to do nuclear, so we don’t invest in the tech, so the tech doesn’t get safer, so the public isn’t willing to invest in it, and so on.

That’s pretty much the opposite of household electricity and automobiles, which, thanks to many factors (laws, enforcement of those laws, technology to improve the enforcement of those laws, plus of course standardization and safety technologies themselves), have become so safe and foolproof that most Americans have them in their homes and garages.

I was thinking of all this today after interviewing a friend and colleague, Dr. Ryan Metcalf, for my lab industry podcast LabMind. Ryan is a blood banker, i.e., a specialist in transfusion medicine. He’s also an enthusiastic data scientist who is highly optimistic about the potential of AI in medicine. But in contrast to Andreessen, Ryan (like all blood bankers) is a hardcore quality-and-safety guy. In our interview, Ryan used this JAMA article to make two points. One is that when it comes to safety, use cases (when and where and for what you use the tech) matter a lot. Electricity is really safe for powering a hair dryer in a dry environment, but not for powering that same hair dryer while you’re sitting in a bathtub. Likewise, a car in the hands of a sober, licensed driver is pretty safe, but give the keys to a 5-year-old and it’s pretty scary. Ditto using a car to tow a skateboarder. Ryan’s other point, emphasized in that same article, is that the best use cases tend to come from customers, not vendors. Vendors tend to develop apps that are impressive; customers (when they have the resources to do so) tend to develop apps that are useful. Useful beats impressive every time.

One intriguing medical use case for AI would be as a sort of quality control assistant, constantly monitoring clinical data streams and alerting human experts when anomalies are detected. Another would be using LLMs such as ChatGPT as communication assistants, editing written materials to make them more easily readable. (But note: the hype around this last article focused on knowledge rather than readability, and thus came to the false conclusion that AI is about to replace doctors.) Both are quite safe use cases, where the AI doesn’t need to be perfect, because human experts are making the actual judgments. These use cases are very different from using AI to replace human experts.
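To make that first use case concrete, here’s a minimal sketch of what a QC assistant might look like. Everything in it is my own hypothetical illustration (the class name, the simple z-score rule, the thresholds are not from Ryan’s work or the JAMA article), and a real clinical system would use validated models and carefully tuned limits. The point is the shape of the design: the software only flags, and a human decides.

```python
# Hypothetical sketch of a human-in-the-loop QC assistant.
# It watches a stream of lab values and flags statistical outliers
# for expert review; it never takes action on its own.

from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Flags values that drift far from the recent baseline."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.z_threshold = z_threshold       # illustrative cutoff, not validated

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous vs. the recent window."""
        is_anomaly = False
        if len(self.history) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# Example: ten stable results, then one that jumps out.
monitor = AnomalyMonitor()
for result in [5.1, 5.0, 4.9, 5.2, 5.0, 5.1, 4.8, 5.0, 5.1, 4.9, 9.7]:
    if monitor.observe(result):
        print(f"Flagged {result} for human review")  # the expert makes the call
```

Note the key design choice, which mirrors Ryan’s point about safe use cases: the monitor doesn’t need to be perfect, because its only job is to surface candidates for a human expert’s judgment.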

Bottom line: Technological advancement is most successful when safety mechanisms develop in parallel, including legal, cultural, and technological mechanisms, both to adapt society to the technology and to adapt the technology to the real world. As long as a technology’s safety and reliability stay in balance with its power, society tends to see steady innovation that advances its use. But when safety and reliability lag behind the raw power of a technology, progress is likely to languish.

That’s why I’m a techno-safety optimist.
