Dear AI Santa: Are You Real? And Can You Send Me a (Human) Doctor for Christmas?

"AI Santa" generated by Dall-e

In December 2020, in the midst of the COVID-19 pandemic and just prior to the first vaccines, parents around the world grappled with a question: Would their children be able to sit on Santa's lap this year? It seemed clear by then that young children were at low risk of symptomatic disease, but middle-aged and elderly men were definitely at risk. And beards not only didn't protect against the virus, they had the potential to interfere with the effectiveness of N95 masks. (Plus, strange red-suited men are scary enough to a toddler without introducing a mask into the picture.) So: AI Santa to the rescue!

Which raises a metaphysical question: Is an AI Santa as real as a human Santa? (See ChatGPT’s answer at the bottom of this newsletter.) What does “real” or “human” even mean in this context? Is the illusion of humanness sufficient, or does there need to be an actual person in a red suit? In the case of Santa, where the target audience is the under-8 crowd, perhaps these sound like trivial questions. But in healthcare, they’re a big deal.

At a recent Forbes Healthcare Summit, Judith Faulkner, CEO of Epic Systems, dropped this little bomb: “So far, our clinicians and patients have found that they like the AI's response better than the human's response because the AI is more empathetic.”

Really? AI is more empathetic than humans? Now, maybe I should give Faulkner the benefit of the doubt: perhaps she meant to say that AI-generated responses to patient questions were judged to be more empathetic than clinician-generated responses. Or maybe she's so empathy-starved, like some of her fellow billionaire entrepreneurs, that she actually meant exactly what she said.

Regardless, it's obviously not true. AI large language models and chatbots aren't capable of empathy. They're not even sentient, let alone capable of relating to sentient beings. What they are good at is stringing together sequences of words that mimic empathetic language. It's an effective illusion, but still an illusion. Now, the research Faulkner (presumably) based her comments on didn't pretend that AI is actually empathetic. Here's the JAMA Internal Medicine article from last summer that seems to have kicked off this topic. The researchers in this study took written medical questions from a Reddit forum, posed them to both volunteer clinicians and ChatGPT, and then asked people to rate the quality and empathy of the responses. And as you probably already know, ChatGPT did better than the humans in this extremely artificial test. But that doesn't mean ChatGPT is actually empathetic, or knowledgeable, in a real-world sense. It just means ChatGPT can generate text that feels empathetic to readers.

Why is empathy important in medical communication, anyway? It's not, as Faulkner implies, simply a matter of preference, i.e., "liking" a chatbot's response to a question. It's also not just about patients feeling valued and heard, though that's getting closer. The real importance of empathy lies in the fact that communication is a critical element of healthcare, and empathy improves the quality and reliability of communication. Empathetic physicians listen better, and they're more likely to ask the critical questions that lead to accurate diagnoses. Patients, for their part, share more openly when they feel listened to. Both are critical to medical success. You see this in studies of language translation within healthcare, where medical outcomes are strongly correlated with communication outcomes. (More on this in a future newsletter.)

So if you don't want your doctor or your therapist replaced by an AI chatbot (and you definitely shouldn't!), can AI still improve medical communication? I believe it can. But it requires a different sort of use case, one that supports clinicians rather than replacing them.

The most obvious use case, and in fairness the one that Faulkner discussed at the Forbes conference, is using AI large language models to draft written responses to patient questions, which physicians then edit and send. The idea is that physicians will be both faster and better communicators as a result. For this to work, the user interface needs real attention so that physicians actually engage mentally with the questions and responses rather than rubber-stamping the AI output. But done well, it could unburden doctors and benefit patients at the same time.

A second AI use case, albeit one that requires even more R&D to pull off, would be AI-powered empathy feedback to clinicians. Sort of like a friend or spouse discreetly poking you at a party when you inadvertently say something inappropriate.

Empathetic human connections are foundational to the practice of medicine. With more work, AI apps might be able to strengthen these connections in the future. But it’s critical that we don’t mistake AI assistance for AI humanity, or build chatbot applications that replace rather than support the human caregivers at the heart of medical encounters.

Oh, and here's how ChatGPT responded to my question, "Is an AI Santa as real as a human Santa?":

“…while an AI Santa may exist in the realm of technology and simulation, a human Santa is a real person who takes on the role of Santa Claus during holiday festivities. Both concepts involve the representation of the Santa Claus character, but they differ in their nature and existence.”

Merry Christmas and Happy Holidays!

Jamie Larson