[Photo: man in a business suit doing yoga at work. Photo source: Adobe Stock Photos]

Healthcare organizations, including payers, have placed a great deal of emphasis on measuring the effectiveness of programs they use to empower people to make better choices about their medical care. But is the same effort expended on employer-sponsored wellness programs? If it is not, it should be. Purchasers need to ask pointed questions about the effectiveness of every program they pay for and provide to their employees.

Here are three good questions to ask your wellness program providers:

  • Has this program been shown to lead to the results you are promising?
  • What types of data do you collect to measure the results?
  • What types of analyses do you perform on that data to show the program’s impact (e.g., ROI)?

Overdiagnosis and overtreatment

Common sense, the basis for many wellness programs, is often at odds with science. For example, common sense suggests that detecting cancer early via screening programs will lead to better outcomes and lower costs for the payer. But the truth is more complicated.

Indeed, a person whose cancer is found at an early stage by a screening test may have a better outcome than someone who finds their cancer on their own at a later (larger) stage. But this may not be attributable solely to the screening. Cancers found as a result of screening are generally less aggressive than self-detected larger ones that grow more quickly. In fact, according to a JAMA editorial on cancer awareness and screening,

“…national data demonstrate significant increases in early-stage disease, without a proportional decline in later-stage disease.”

The authors of the editorial went on to argue that we need a new word for these tumors, reserving the Big C word for “lesions with a reasonable likelihood of lethal progression.”

So, whereas common sense suggests that cancer screening may lead to better outcomes (e.g., more cancers detected at an early stage), science shows us that screening is not improving overall health.

You may be tempted to ask: what’s the big deal if we screen a lot of people, as long as we catch the ones who do have aggressive cancer? The trouble is that screening is expensive and puts more people on the road to a workup and treatment plan that adds substantial cost without meaningful benefit. Experts now recognize this as a form of overdiagnosis. Overdiagnosis inevitably leads to overtreatment, and this costs, not saves, money.

Tips for employers providing wellness programs to their employees

Tip #1: Common sense is not enough to justify a program. Demand objective proof from an independent source such as a peer-reviewed journal.

Do not rely on common sense to tell you what the results should be. Instead, ask for peer-reviewed, published evaluations of the program or of a substantially similar program. If the vendor has only an internal evaluation, ask whether their program results have been validated by outside experts. Remember, evaluations done by the vendor are likely to be biased in favor of the program.

Some vendors will tell you that their program is new and cutting edge, so there are no studies to show results yet. However, very few programs are truly unique, so the chances are that something similar has been studied. For example, a program might have a new way to deliver patient education about choosing non-invasive care instead of surgery. Although the vendor’s particular method of delivering that education has not yet been studied, the impact of educating patients about treatment choices has been widely studied, and the research clearly shows that patients who receive such education are more likely to choose non-invasive care. So you can presume that this program may have a similar outcome, although the program’s own results still need to be measured to be sure. And if what you really care about is reducing your healthcare costs, you would also need to look at studies that analyzed claims to determine net savings.

Bottom line: ask directly whether the program has been shown to lead to the promised results.

Tip #2: A superior wellness program has a plan for gathering data that is directly tied to the results being measured.

Many programs claim to not only reduce medical costs but also to reduce absenteeism and boost productivity. Very few have reliable data to back up these claims. Unless you are in the unusual position of not having to justify the program’s cost, you will need data to demonstrate results. That means having a data gathering plan.

Data gathering can be as simple as a two-question online survey to see whether a change has occurred. Or it may be much more complex, involving amassing medical, pharmacy, and attendance records to see whether the program lowered medical costs and illness-related absences.

Ideally, you gather just the data you need and no more, and that data links directly to the program’s goal.

Beware: having a lot of data is not the same as having a measurable result. Knowing how many times a participant visited a website or used her wearable does not tell you the program’s impact. Look for data sources that are easy to collect and tie directly to the outcomes you hope to achieve.

A single question, if it is the right one, can reveal volumes. For example, “Would you say in general that your health is excellent, very good, good, fair, or poor?” This question has been used nationwide in the Behavioral Risk Factor Surveillance System survey. It has been extensively studied and shown to be a reliable indicator of a person’s actual health status and a strong predictor of the use of medical services.
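To make that concrete, here is a minimal sketch (in Python, with made-up responses, not any validated survey methodology) of how answers to that single question, collected before and after a program, could be summarized into a trackable result:

    from collections import Counter

    # Illustrative, made-up self-rated health responses gathered before
    # and after a program ("excellent" ... "poor").
    before = ["good", "fair", "fair", "poor", "good", "very good", "fair"]
    after = ["good", "good", "fair", "fair", "very good", "very good", "good"]

    def summarize(responses):
        """Return the response counts and the share answering fair or poor."""
        counts = Counter(responses)
        fair_or_poor = (counts["fair"] + counts["poor"]) / len(responses)
        return counts, fair_or_poor

    for label, responses in (("Before", before), ("After", after)):
        counts, share = summarize(responses)
        print(f"{label}: {dict(counts)} | fair/poor share = {share:.0%}")

The point is not the particular statistic; it is that one well-chosen question yields a number you can track over time and tie to the program’s goal.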

The data need not be extensive or complicated, but they do need to be relevant and accessible.

Bottom line: Having a plan for data gathering is essential.

Tip #3: Technical specifications provide a roadmap that leads you from collected data to a measure. If your program vendor has no such map, you have no results.

The final step is organizing the data into a measure. This is not as simple as adding up the number of hospital stays or ED visits. All valid measures account for the population and its fluctuations, generally by expressing results as a rate, such as emergency room visits per 1,000 member-months.
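As a rough illustration (the enrollment and visit counts below are hypothetical, not drawn from any published specification), the arithmetic behind such a rate looks like this:

    # Hypothetical monthly enrollment counts for one plan year and the
    # total emergency room visits by members over the same period.
    monthly_enrollment = [1020, 1015, 1008, 998, 1002, 1010,
                          1012, 1005, 999, 1001, 1007, 1011]
    er_visits = 145

    # Member-months is the person-time denominator: the sum of members
    # enrolled in each month.
    member_months = sum(monthly_enrollment)

    rate_per_1000 = er_visits / member_months * 1000

    print(f"Member-months: {member_months}")
    print(f"ER visits per 1,000 member-months: {rate_per_1000:.1f}")

Dividing by member-months, rather than by a simple head count, keeps the measure comparable when enrollment grows or shrinks during the year.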

Published measure technical specifications are available from national organizations, such as NCQA. The program vendor should be able to cite one of these published measures or tell you how the measure is compiled. If they claim it is “secret sauce”, rest assured that they do not know how to measure results. There are no secrets in compiling valid measures.

Using published measures is greatly preferred because the measure has been thoroughly vetted and studied. There is more to this than meets the eye. While a measure might sound simple – such as hospital readmissions within 30 days of a discharge – the mechanics are quite intricate. The HEDIS technical specifications for its hospital readmission measure are 14 pages long, not including lists of thousands of procedure and diagnosis codes.

In addition, ask how success is defined. A vendor might compare your group’s rate to a national benchmark and declare the difference to be savings. If you had only one program and nothing else were influencing your group, perhaps that would be a reasonable approach. But even with only one program, many other factors can influence a group’s behavior in a single year.

For example, during the Great Recession, spending on health services went down for virtually every kind of patient. This wasn’t because everyone suddenly had great care management, or sharp consumerism, or better health. On the contrary, we as a nation probably had worse health during the Great Recession. The spending slowdown was a side effect of the overall economy, not the result of any improvements in health or healthcare.

Make sure the definition of success or savings or ROI is reasonable and does not give unearned credit to the program.
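To see how easily a benchmark comparison can flatter a program, here is a deliberately naive sketch (all dollar figures hypothetical) of the “savings” and ROI math a vendor might present:

    # Hypothetical figures: a naive benchmark comparison that credits the
    # program with the entire gap between the group's cost and a national
    # benchmark.
    benchmark_cost_pmpm = 430.00   # benchmark cost per member per month
    actual_cost_pmpm = 410.00      # your group's observed cost
    members = 1_000
    months = 12
    program_fee = 120_000.00       # annual vendor fee

    claimed_savings = (benchmark_cost_pmpm - actual_cost_pmpm) * members * months
    # One common ROI convention: net savings divided by program cost.
    claimed_roi = (claimed_savings - program_fee) / program_fee

    print(f"Claimed savings: ${claimed_savings:,.0f}")
    print(f"Claimed ROI: {claimed_roi:.1f}x")

    # Caveat: this math attributes every dollar of the gap to the program,
    # even though plan design, demographics, or the broader economy could
    # account for some or all of it.

This is exactly the kind of calculation to probe before accepting it at face value.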

Bottom line: make sure the measure uses published specifications from a national organization or that its details are disclosed to you.

Have a “no yeah-buts” rule

By posing these three questions and refusing to accept “yeah, but” answers, you will be able to make better decisions about which programs are, and which are not, high value and high impact. Your boss will thank you, and so will your employees.

Linda Riddell M.S.
Linda K. Riddell, M.S. is a Population Health Scientist and an Independent Validator. Her company, Health Economy, LLC, specializes in measuring outcomes for health and wellness programs, such as coaching, behavior incentives, and novel interventions. Clients range from state governments to private insurers to start-up technology companies. She is also Vice President Strategic Initiatives for the Validation Institute which peer reviews outcome measures for member companies. She has 30 years’ experience in health care, public and private health insurance, and health policy. She has a master’s degree in health policy and management from the Edmund Muskie School of Public Service.
