Did you ever hear the old joke where the boss says floggings will continue until morale improves? Flogging the data until results improve (or the data confesses) is not uncommon. Too bad.

In my career, I’ve worked with companies with over 100,000 covered lives whose claim costs could swing widely from year to year, all because of a few extra transplants, big neonatal ICU cases, ventricular assist cases, and so forth.


The “shock” claims

Here are some examples of the huge single case claims I’ve observed in recent years:

  • $3.5M  cancer case
  • $6M    neonatal intensive care
  • $8M    hemophilia case
  • $1.4M  organ transplant
  • $1M    ventricular assist device

This is not a complaint. After all, cases like these, enormous and unbudgetable health events, are the reason why we need health insurance.

All plans see roughly one organ transplant for every 10,000 life years, most of which will cost about $1M over six years. A plan with 1,000 covered lives will incur such an expense on average every 10 years. Of course, the company may have none for 15 years and then two in the 16th year. The same goes for the $500,000+ ventricular assist device surgeries.
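To get a feel for how lumpy those arrivals are, here is a rough sketch. It assumes transplants arrive like a Poisson process at the rate cited above (1 per 10,000 life years); that distributional assumption is mine, not the article's.

```python
import math

# Assumption: organ transplants arrive like a Poisson process
# at roughly 1 per 10,000 life years (the rate cited above).
rate_per_life_year = 1 / 10_000
covered_lives = 1_000

lam_per_year = rate_per_life_year * covered_lives  # expected 0.1 transplants/year

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events when the expected count is lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Chance of seeing NO transplant over 15 straight years...
p_none_15yr = poisson_pmf(0, lam_per_year * 15)
# ...and of seeing TWO in a single year.
p_two_in_one_year = poisson_pmf(2, lam_per_year)

print(f"P(zero transplants in 15 years) = {p_none_15yr:.2f}")       # ~0.22
print(f"P(two transplants in one year)  = {p_two_in_one_year:.3f}")  # ~0.005
```

Under that assumption, a 1,000-life plan has better than a one-in-five chance of going 15 years with no transplant at all, which is exactly the feast-or-famine pattern described above.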


Why claims data for small groups is so perilous

Looking at claims data for small groups is perilous, and sometimes for large groups as well. Because of the high cost and relative infrequency of so-called “shock” claims (those over $250,000), you need about 100,000 life years for the claims data to be even approximately 75% credible. When a group with 5,000 lives says it did something that cut its claims costs, it can’t really know whether the change made a significant difference for a couple of decades.
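One way to see why: the classical limited-fluctuation (square-root) credibility rule weights a group's own experience by Z = sqrt(n / n_full), capped at 1. The full-credibility standard below is my own back-solved figure, chosen so that 100,000 life years works out to 75% credibility as stated above; it is not a number from the article.

```python
import math

# Assumption: n_full is back-solved so that 100,000 life years gives Z = 0.75
# under the square-root rule Z = sqrt(n / n_full).
N_FULL = 100_000 / 0.75**2   # ~177,800 life years for full credibility

def credibility(life_years: float) -> float:
    """Limited-fluctuation credibility weight for a group's own claims data."""
    return min(1.0, math.sqrt(life_years / N_FULL))

for lives, years in [(3_000, 1), (5_000, 1), (5_000, 20), (100_000, 1)]:
    z = credibility(lives * years)
    print(f"{lives:>7,} lives x {years:>2} yr -> Z = {z:.2f}")
```

A 5,000-life group's single year of experience gets a weight of only about 0.17; it takes that group 20 years of data to reach the 75% mark, which is the "couple of decades" point above.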

Here is an example. A smallish group with about 3,000 covered lives asked me to help calculate how much their wellness plan was saving. They had sorted all employees into one of three tiers: active wellness participants, moderate participants, and non-participants. I warned them they didn’t have enough data to be credible, but they proceeded anyway. They expected active participants to have the lowest claim costs, and so on. When the data were reviewed, there was a perfect reverse correlation: active wellness users had the highest claim costs, moderate users the next highest, and non-participants the lowest. In their final report, from which I had recused myself, they subtracted out big claims by the active and moderate users to get the results they wanted. In short, they flogged the data until it confessed. Alas.


Confirmation bias

One large company claimed huge reductions in plan costs after adding a wellness program. It turns out that during the period in question it had also implemented an “early out” incentive. Upon examination, the early-out program produced a big reduction in the number of older employees, which more than accounted for the reduction in claims costs.

Here is yet another example. I was at a conference a few years ago at which a presenter from a small company, about 1,000 covered lives, claimed to have kept its health costs flat for five years through wellness initiatives. While he got a big ovation, his numbers just didn’t add up. I asked him a few questions after his speech about what other changes he had made during that period. He said they had lowered their “stop loss” limit from $100,000 to $50,000 a few years earlier. Then he admitted to excluding his stop-loss premium costs, which were skyrocketing, from his presentation. With a little mental arithmetic, I added those back in, which revealed that his company’s total health costs were going up at the same rate as everyone else’s, perhaps even a little higher. Hmmm. I don’t think he deliberately misled the audience. He just didn’t know better. When you hear boasts of big short-term impacts of wellness programs, beware of confirmation bias.
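The add-back arithmetic is easy to reproduce. The dollar figures below are hypothetical, since the presenter's actual numbers were never published; the point is the mechanism: lowering the stop-loss attachment point shifts claims out of the "flat" self-funded bucket and into the excluded premium bucket.

```python
# Hypothetical figures (the presenter's real numbers were not published).
# Mechanism: claims above the stop-loss limit move out of the
# "self-funded claims" bucket and into the stop-loss premium bucket.

years = [1, 2, 3, 4, 5]
reported_claims = [10.0, 10.0, 10.0, 10.0, 10.0]   # $M, looks "flat"
stop_loss_premium = [1.0, 1.4, 2.0, 2.8, 3.9]      # $M, excluded and skyrocketing

total = [c + p for c, p in zip(reported_claims, stop_loss_premium)]
growth = (total[-1] / total[0]) ** (1 / (len(years) - 1)) - 1

print("Total cost by year ($M):", total)
print(f"Annualized trend once premiums are added back: {growth:.1%}")
```

With these illustrative numbers, the "flat" plan is actually trending up about 6% a year once the excluded premium is counted, right in line with everyone else.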


Questions to ask when evaluating health plan cost savings

When a company claims they implemented something that caused their health plan costs to drop 15% or so, ask a few questions:

  1. The big question: did the company adjust for plan design changes, such as raising deductibles and copays, that merely shifted costs to employees?
  2. Did the changes really save claim dollars?
  3. Did they factor in stop loss premiums?
  4. How many life years of data did they observe?
  5. Did the company exclude large or “shock” claims? (This is not uncommon, especially among wellness vendors.)
  6. Did it experience any big changes in demographics, such as through implementation of an early retirement program or layoffs that impacted older workers the worst?

When I’ve asked those kinds of questions, I’ve almost never seen a big claim of cost reductions by a small company hold up under scrutiny, and the same goes for some big companies, too.


The takeaway

Flogging claims data to get the desired results is all too common. That’s no surprise. Academics and big pharma keep getting caught doing the same thing. When you are in the business of evaluating results using claims data, skepticism is a very good thing.

Tom Emerick
Thomas G. Emerick is the President of Emerick Consulting, LLC and Host of Cracking Health Costs. Tom’s years with Wal-Mart Stores, Inc., Burger King Corporation, British Petroleum, and American Fidelity Assurance Company have provided an excellent blend of experience and contacts. His last position with Wal-Mart was Vice President, Global Benefit Design. Tom has served on a variety of employer coalitions and associations, including being on the board of the influential National Business Group on Health, the U. S. Chamber of Commerce Benefit Committee, and many others. Frequently in demand as a speaker for benefits and health care conferences, such as the internationally known World Health Care Congress, Tom’s topics include strategic health plan design, global health care challenges, health care economics, and evidence-based medicine.
