“You shouldn’t get the death penalty for not making your numbers.”
That’s what a friend of mine who runs a successful population health management (PHM) company told me. But that’s exactly what happened to my PHM company when we got too heavily involved in CMS-sponsored (Medicare fee-for-service) disease management pilot programs that didn’t yield the expected results due to a myriad of well-documented study design and measurement problems that don’t need to be rehashed here. So, I was particularly interested in what my friend, Al Lewis, had to say about measuring PHM program outcomes in his new book: Why Nobody Believes the Numbers: Distinguishing Fact from Fiction in Population Health Management.
Statistics, damn statistics
First of all, this book is a wonderful read. It reminded me of a book, How to Lie with Statistics, that I read many years ago (and just re-read). It was written by Darrell Huff way back in 1954, before disease management was invented by Al (yes, it was…just ask him!). As in Huff’s book, Al uses humor (really funny humor) to take aim at common mistakes in the use of “statistics” to measure PHM outcomes, using real-life case studies to illustrate his points (although, unlike Al, Huff used real names in his book). As the one who “invented” the pre-post guaranteed savings methodology, Al certainly knows the topic well and is well qualified to challenge it. It’s kind of like hiring a hacker to tell you where the weaknesses are in your IT system’s security.
I am the furthest thing from an expert in statistics, so I’m not going to attempt to address any of Al’s specific methodological recommendations other than to say that the real benefit of this book is to draw attention to the need to use common sense and basic logic to check the plausibility of reported results. What I’ve learned in my 17-year career in PHM is that if people want to show that the program they’ve purchased or developed works, it will work. If they want to kill it, they’ll mess with the results until it’s dead. The problem is, there is no simple answer to whether a behavioral intervention such as PHM will save money, improve quality, or even change behavior. There are just too many other factors going on that could impact results—benefit changes, demographic changes, provider changes, new treatments, etc.
Why is this an important topic?
Well, because while many of the measurement problems described by Al have been used by CMS and many others to claim that “disease management doesn’t work,” most of the principles and practices of population health management are, ironically, being incorporated into the “solutions” being implemented today, such as Accountable Care Organizations, employer-based wellness programs, and bundled payments. So Al’s message in Chapter 5 that wellness, disease management, and coordinated care actually do work when done right is an important one.
Those who get upset with this book are most likely just tired of battling the constant attacks on organizations that are trying to do the right thing: applying care management, clinical data analytics, patient engagement, and risk-segmented treatment tailoring within a fee-for-service healthcare system that is calcified and resistant to change. Despite that, Al is right to challenge egregiously exaggerated outcomes and the “methodologies” being used to create them. He is also right to caution purchasers of these programs to use critical thinking rather than accepting what they are told at face value. In my experience, there is no shortage of companies out there that will say anything to sell their stuff. That makes it very difficult for legitimate companies to compete.
What has been most frustrating for those of us in this field is that interventions such as disease management, wellness, and care coordination have been held to rigorous cost-savings standards when other interventions in medicine have not. Cries of death panels and rationing notwithstanding, when new treatments win FDA clearance or approval for Medicare reimbursement, their cost/benefit is not taken into consideration; the only question is whether they are equal to or better than other treatments on the market. As we all know, this practice has contributed significantly to the ongoing increases in healthcare expenditures. Whether or not health management interventions result in dramatic cost savings, organizations delivering these services are at least making an attempt to improve health, access, and quality. This is certainly better than the current fragmented, inbound system that these organizations are trying to fix. But perhaps population health management can serve as an example for the rest of healthcare by holding itself up to such scrutiny.
My one complaint
I do have one beef with this book. Its recommendations are strong enough to stand on their own, so it needn’t have spent so much time attacking the Care Continuum Alliance (CCA) Outcomes Guidelines, particularly since Al’s criticism isn’t completely accurate (full disclosure: I have been for many years, and currently am, on the Board of Directors of the Care Continuum Alliance). It is true that early versions of the CCA Outcomes Guidelines attempted to outline a reasonable methodology and overcome many of the issues that Al raises. However, the current Version 5 (released in 2010, before Al’s book appeared) is very deliberate in not specifying a single methodology; rather, it compares the strengths and weaknesses of various methodologies. The Version 5 document makes that clear right from the start, saying that it is not meant to be “prescriptive, formulaic, ideal, or the last word in evolving standardized methods.”
That being said, I highly recommend this book to anyone designing, purchasing, or simply interested in population health management programs. It is informative, instructive, and, above all, entertaining.