Everyone knows that organizations need metrics. Your goals need to be SMART (Specific, Measurable, Achievable, Relevant, and Time-bound). You need KPIs (Key Performance Indicators). And pay should be based on performance, which is to say, based on quantitative metrics. Right?
Not so fast. Measurement is a powerful managerial tool. And in the age of big data, it’s getting even more powerful. But therein lies the danger. Don’t get me wrong. I love metrics. My undergraduate degree was in mathematics, and my master’s thesis was based on a statistics model that I designed to measure the effectiveness of hospital infection control. But powerful tools require skill and thoughtfulness in their application. And right now, I’m seeing a lot of truly awful uses of metrics in the healthcare industry.
This week’s newsletter was inspired by this quite depressing NYTimes article about medical professionals being treated not as professionals, but rather as cogs in the machine, working for the Man. What really caught my eye, though, was the role of metrics. Productivity metrics and “quality” metrics alike force medical professionals to spend enormous time on busy work, often with no time left over for cognitive work and other important activities that aren’t being measured.
But blaming the metrics themselves would let senior decision makers off the hook. Metrics don’t hurt people; rather, bad managers with metrics hurt people.
Here’s my personal list of metrics pet peeves:
- Creating metrics to satisfy the desire to have goals. I experienced this repeatedly during my time on a corporate executive committee, under multiple CEOs. The starting point was the belief that we had to have corporate goals, ideally both annual and quarterly goals. And so the group would first hammer out a consensus list of what we thought was important, and then come up with one thing to measure for each item on the list. The consequences of this lazy approach: First, the items on the list weren’t prioritised against each other, meaning that medium-importance items often crowded out or diluted high-importance ones. And second, most of the measures were misleading (e.g. due to statistical noise). Fortunately(!) the rest of the company mostly ignored these corporate goals.
- Creating incentives for employees to cheat. When goals are unrealistic, and the punishments for missing those goals are harsh (or incentives to hit them are lucrative), then people will use their creativity to game the system. One famous example was the Wells Fargo scandal, where employees were pressured to get customers to open more accounts, so many employees secretly opened accounts that customers neither knew about nor wanted. Another example is the Washington, DC teacher scandal, where public school teachers helped their students cheat on standardised tests. This one ended up having career-ending consequences for some teachers who didn’t cheat (their students’ performance appeared poor in comparison to the students from cheating classrooms, so they were fired).
- Managing the things that are most measurable. Basically, the equivalent of looking under a lamppost for your lost keys, because that’s where the light is, even though you probably lost the keys elsewhere. It distorts priorities, because measurability is completely uncorrelated with importance.
- Pretending that quality and quality metrics are basically the same thing. This one is particularly pernicious in healthcare. There’s an entire quality industrial complex devoted to this. It starts with payers (private insurance and Medicare alike), whose customers want to pay based on value rather than volume. So payers have to find some metrics to base payments on. At this point, the National Quality Forum helpfully steps in (irony intended here) with metrics so thoroughly researched and vetted that few people dare criticise them. What’s wrong with all this? First of all, the metric creation process has so much overhead to it, especially when you add in the complexity of collecting and standardising data across diverse settings, that only a tiny fraction of healthcare outcomes end up on the list. Second, hospital leaders devote most of their quality management resources (including IT resources) to measuring and managing the 0.01% of things that impact payment, as opposed to the 99.99% of things that don’t. Third, just as with Wells Fargo staff and public school teachers, hospitals can get extremely creative in gaming the metrics.
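The statistical-noise problem in the first pet peeve above is easy to demonstrate. Here’s a minimal sketch (the rates and sample sizes are hypothetical, chosen only for illustration): a quarterly infection-rate metric whose true underlying rate never changes still swings widely from quarter to quarter, simply because each quarter is a small sample.

```python
import random

# Hypothetical illustration: a quarterly infection-rate metric.
# The true underlying rate is held constant, so every swing in the
# measured rate below is pure statistical noise.
random.seed(42)

TRUE_RATE = 0.05        # assumed constant underlying rate (hypothetical)
CASES_PER_QUARTER = 200 # a modest sample, typical of a single-site quarter

measured = []
for quarter in range(8):
    infections = sum(random.random() < TRUE_RATE
                     for _ in range(CASES_PER_QUARTER))
    measured.append(infections / CASES_PER_QUARTER)

for q, rate in enumerate(measured, start=1):
    print(f"Q{q}: {rate:.1%}")

# Nothing real changed, yet the best and worst quarters can differ
# substantially -- enough to "hit" or "miss" a goal by chance alone.
print(f"min={min(measured):.1%}  max={max(measured):.1%}")
```

A goal set against a metric like this rewards and punishes noise, not performance.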
A few take-home principles:
- Designing good metrics takes skill and analysis and hard work.
- Metrics intended for quality improvement need to be separated from incentives.
- Metrics need to not just measure the current state, but also detect/quantify improvement. Improvement is harder to measure reliably than baseline performance, for mathematical reasons I won’t get into here.
- Don’t be afraid of subjective measurements. In a field like healthcare, much of what’s important involves human factors that are best assessed subjectively. Forcing these issues into a quantitative box leads to all kinds of distortions.
- Holistic > precise. When the big picture isn’t right, then precision on the details is a waste of time.
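One of those mathematical reasons for why improvement is harder to measure is simple to show. A measured change is the difference of two noisy measurements, so its variance is roughly the sum of both variances (about double, when the periods are comparable). A hedged sketch with hypothetical numbers:

```python
import random
import statistics

# Hypothetical sketch: an estimate of *change* (after minus before)
# combines the noise of two measurements, so it is noisier than
# either measurement alone -- by roughly a factor of sqrt(2).
random.seed(0)

TRUE_RATE = 0.05  # assumed constant rate (hypothetical)
N = 200           # cases per measurement period (hypothetical)

def measured_rate():
    return sum(random.random() < TRUE_RATE for _ in range(N)) / N

# Simulate many baselines and many before/after differences.
baselines = [measured_rate() for _ in range(2000)]
changes = [measured_rate() - measured_rate() for _ in range(2000)]

sd_baseline = statistics.stdev(baselines)
sd_change = statistics.stdev(changes)
print(f"sd of a single measurement: {sd_baseline:.4f}")
print(f"sd of a measured change:    {sd_change:.4f}")
```

The practical consequence: detecting a real improvement reliably requires considerably more data (or a larger true effect) than establishing the baseline did.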
Finally, here are two book recommendations. Neither is about healthcare, but both have lessons that are extremely important to healthcare.
- Uncommon Service, by Frances Frei and Anne Morriss. Provides some surprising insights on managing performance in service industries. The authors give great examples of how using the wrong metrics, such as call handling times in a call center, leads to the opposite effect from what is actually intended (namely, efficiently resolving customers’ problems).
- Weapons of Math Destruction, by Cathy O’Neil. Gives examples of how poor applications of algorithms (which are basically just complex metrics) have caused enormous social harm. (One of her chapters is on teacher cheating scandals.)