
Validating Your Own Outcomes

February 10, 2012

Mr. Charles Ulmer Farley
[address]

Dear Mr. Farley:

I have reviewed the [Your Company] disease management outcomes from July 2010 to June 2011 and can state that they are accurately represented by the accompanying event rate trend chart.

This specific methodology measures rates of adverse medical events in the event categories most likely to benefit from disease management and most often disease-managed. Because this methodology is based on the entire population and not just the “population of people with the condition,” there is no regression to the mean. This methodology is in strict compliance with the only provably valid component (event rate measurement) of the most recent edition of the Outcomes Measurement Guidelines published by the Care Continuum Alliance, which is otherwise invalid. It is also the standard used as the basis for the Health Industries Research Council (HIRC) “Best Disease Management Program” awards, which [Your Company] has won several times using this same methodology.

This methodology is described in the accompanying document, and at greater length both in my book The Next Generation of Disease Management (www.aispub.com) and in my forthcoming book Why Nobody Believes the Numbers (John Wiley & Sons, June 2012; http://www.dismgmt.com/why-nobody-believes-the-numbers).

Compliance with DMPC guidelines means that measured outcomes will validly portray true economic outcomes, that economic outcomes are likely to be favorable, and that most sources of systematic error are avoided. To build on the statement above about regression to the mean, encompassing the entire disease population ensures that all events are counted every year. In a pre-post measurement, all events are counted only in the baseline year; thereafter, events are counted only if suffered by previously identified members. For instance, everyone with a heart attack in the baseline year is included in the baseline year (as well as people judged, based on claims data, to be at high risk for infarction). However, some people with heart attacks in a contract year were not identified as high-risk during the baseline year and therefore aren’t counted. By counting everyone with a heart attack in the baseline year while not counting everyone with a heart attack in the study year, the number of heart attacks (and hence the cost of cardiac disease) will always appear to have fallen even if the true number of events remains the same. This bias cannot occur in a methodology that counts every heart attack every year.
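To make that arithmetic concrete, here is a minimal sketch of the counting bias just described. Every member ID and event count in it is invented purely for illustration and does not come from any actual plan data; the point is only that the pre-post tally freezes its cohort at the baseline year, while the event-rate tally counts every event every year.

```python
# Hypothetical illustration of the pre-post counting bias (all data invented).
baseline_events = {"m01", "m02", "m03", "m04", "m05"}  # members with a heart attack in the baseline year
high_risk_flags = {"m06", "m07"}                       # members flagged high-risk from baseline claims
identified = baseline_events | high_risk_flags         # pre-post cohort, frozen at baseline

# Contract year: the same number of heart attacks (five), but two occur in
# members who were never identified during the baseline year.
contract_events = {"m01", "m02", "m06", "m11", "m12"}

# Pre-post measurement: only events suffered by previously identified members count.
pre_post_baseline = len(baseline_events)               # 5
pre_post_contract = len(contract_events & identified)  # 3 -- m11 and m12 are invisible

# Whole-population event-rate measurement: every heart attack counts every year.
event_rate_baseline = len(baseline_events)             # 5
event_rate_contract = len(contract_events)             # 5

print(f"Pre-post:   {pre_post_baseline} -> {pre_post_contract} (an apparent 40% decline)")
print(f"Event rate: {event_rate_baseline} -> {event_rate_contract} (correctly shows no change)")
```

With identical true event counts in both years, the pre-post tally reports a 40 percent “reduction,” while the whole-population event rate is flat.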

While the methodology is both mathematically and epidemiologically valid, meaning free of “systematic error” that will always bias results in one direction, no methodology is free of random error. The major sources of random error are as follows:

  1. It is possible that chance alone biased the outcomes; [YOUR COMPANY] could simply have had a good year. However, by extending the same methodology over multiple years, the influence of chance is diminished as a trend is revealed.
  2. It is possible that [YOUR COMPANY]’s demographics are changing. However, [YOUR COMPANY] is aging faster than the population as a whole, so any change due to demographics alone would lead to less favorable results.
  3. Improvements could be due to changes in benefits design made concurrently with the implementation of disease management. In that case it would be impossible to separate the improvements due to disease management from the improvements due to the other benefits design changes. An example for [YOUR COMPANY] would be asthma, where changes made to the benefits design had the effect of increasing compliance. Because it is impossible to parse out the improvements according to their cause, for the purposes of this letter “asthma disease management” is defined as all proactive activities designed to reduce asthma attacks.
  4. It is possible that the underlying trend of adverse event rates is itself changing. This is indeed the case, and it is captured very conservatively in the national average of 25 other payors, “very conservatively” meaning that the national average includes not only the secular trend but also reductions in events due to disease management itself in other payors. Because at this point it is not possible to find populations where no DM is done, a health plan doing DM in a typical manner may not appear effective simply because the average itself is declining due to DM. That is why we extend the analysis over multiple years and look at variance from the mean over time (a simple numerical sketch of this comparison follows this list).
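As a rough illustration of what “variance from the mean over time” looks like in practice, the sketch below computes a plan’s adverse-event rate per 1,000 members each year and compares it with a national-average benchmark. Every figure in it is invented for illustration; none of these numbers comes from [YOUR COMPANY] or from the actual multi-payor database.

```python
# Hypothetical multi-year comparison of a plan's event rate with a national benchmark.
# All figures are invented for illustration only.
plan_events  = {2008: 412, 2009: 398, 2010: 371, 2011: 344}           # adverse events, whole population
plan_members = {2008: 51_000, 2009: 52_500, 2010: 53_000, 2011: 54_200}
benchmark_per_1000 = {2008: 8.1, 2009: 7.9, 2010: 7.8, 2011: 7.7}     # national average of other payors

for year in sorted(plan_events):
    rate = 1000 * plan_events[year] / plan_members[year]      # plan events per 1,000 members
    variance = rate - benchmark_per_1000[year]                 # negative = fewer events than benchmark (favorable)
    print(f"{year}: plan {rate:.2f}/1,000  benchmark {benchmark_per_1000[year]:.2f}  variance {variance:+.2f}")
```

A single favorable year could be chance (item 1) or a benchmark that is itself declining (item 4); a favorable variance that persists and widens across several years is much harder to attribute to either.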

With those four qualifications, I am able to state that in 2010-2011 [Your Company] achieved a rate of adverse medical events considerably lower than similar populations in the United States as a whole, on an absolute basis and adjusted for age, making it the best relative performer among employers in the database. Further, this positive variance increased over the five-year time period in question, especially in respiratory conditions.

Sincerely,

Al Lewis
President


Disease Management Purchasing Consortium International, Inc.

890 Winter Street, Suite 208
Waltham, MA 02451
Phone: 781 856 3962
Fax: 781 884 4150
Email: alewis@dismgmt.com