Thursday, January 5, 2012

Modern Metric Morass.


First off, I personally am looking forward to a happy, healthy and successful 2012, and I wish the same to the readers of this blog.

To continue on: I have mentioned David Axson's book The Management Mythbuster on a number of occasions.  Today I will make my last commentary on it.  I want to say up front that this book is a valuable read for anyone interested in Quality, not because it is brilliant and agrees with everything that I believe, but rather because it is brilliant and puts forward some very compelling discussion points on organizational management.  That being said, it is an opinion book.

If there is one area where Axson falls down, it is in his 10th chapter, Lies, Damn Lies and Performance Metrics.  It is not really his fault; we are all pretty schizoid when it comes to statistics and measurement.  On the one hand, we all recognize that if we don't Study (as in PDSA) then we can't meaningfully Act.

The problem is at the other end.  First, there is a notion that if one metric is good, then 10 are better and a thousand must be better still (Axson's first sentence is "we are drowning in metrics").  Second, we gather metrics based more on what seems to be popular (what we hear or read about) than on a strategic process (Axson calls these the Metric of the Month).  And third, even though many (most) of us are pretty statistics-illiterate, we run all sorts of analyses simply because we can.  The reality is that statistics software doesn't require much statistics knowledge; all you do is fill in the fields and hit "return".  The computer doesn't care if the result is meaningful or nonsense; it just spits out a number.

The result is that all too frequently we spend a great deal of time measuring and very little time testing the relevance and utility of the information.  Excessive metrics gathering is a classic waste of TEEM (Time, Effort, Energy and Money).

Axson provides a pearl when he offers that the way out of the metrics morass is to focus first on what is really important (core) to the organization.  In business it is about profit and loss, and in baseball it is about wins and losses.  I suggest that for the medical laboratory it is about getting the right result to the right set of eyes at the right time.

So if a good approach is to focus metrics on what matters, let me offer the following:

  • Right result:   Measure of pre-analytic and analytic errors, reported in errors per million tests.
  • Right eyes:     Measure of misdirected reports, reported in errors per million tests.
  • Right time:     Measure of late reports, again reported in errors per million tests.


Actually that is a pretty good place to start.   All three should be straightforward to capture and measure.  Since all three are expressed against a common base value, significant improvement or deterioration can be monitored.   If we have problems here, we are not doing what we are supposed to be doing.
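By way of a small sketch of what I mean by a common base (the counts and test totals below are hypothetical, purely to illustrate the arithmetic), converting each raw error count to errors per million tests is a one-line calculation:

# A minimal sketch, in Python, assuming hypothetical monthly counts,
# of how the three core metrics reduce to one common base:
# errors per million tests.

def errors_per_million(error_count, total_tests):
    """Convert a raw error count into errors per million tests."""
    return error_count / total_tests * 1_000_000

# Hypothetical figures, for illustration only.
total_tests = 48_500
core_metrics = {
    "Right result (pre-analytic and analytic errors)": 37,
    "Right eyes (misdirected reports)": 5,
    "Right time (late reports)": 112,
}

for name, count in core_metrics.items():
    rate = errors_per_million(count, total_tests)
    print(f"{name}: {rate:.0f} errors per million tests")

Because every indicator is normalized to the same denominator, month-over-month movement in any one of the three can be compared directly against the others.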

One can argue about whether one would need to benchmark results against industry standards, but the reality is that there are few, if any, benchmarks for medical laboratory quality other than the expectation of ZERO defects.   We will talk about the definition of what constitutes a late report another time.

Now I would like it to be that simple, but I also recognize that laboratory life tends to get more complicated.   There are other important issues, like safety and competency and quality control performance, that laboratories also need to monitor.  There will always be new issues that arise and require monitoring.  And there will always be pressures to measure more.  That being said, let me offer the following:

  1. Indicators implemented to follow up on a single point-in-time problem rarely need to be followed for more than 4-6 months, and at the very most a year.  
  2. If you have not used the information garnered by a metric for 2 years, then decide whether the measurement is worth the time and effort it takes to generate.
  3. If you cannot explain in specific terms what a metric provides, then don’t measure it. 
  4. If there is no organizational memory about why you are measuring something, then it probably makes a lot of sense to stop immediately.
  5. If a metric does not fluctuate, either because it always provides positive information or always provides negative information regardless of the amount of focused action, there is something wrong somewhere.  

