Monday, February 10, 2014

Validation Verification and Vindication

We live in an era that loves to believe in logic and discipline as the basis of progress.  In nearly every discipline the clarion call to quality is that decisions must be evidence-based.  Evidence-based medicine is touted as the foundation for all diagnosis, investigation and treatment, on the assumption that the overwhelming mass of studies performed and published provides us with all the information we need.  

But it is pretty clear that many (most?) of those studies are too small, too uncontrolled, and too biased to be reliable, so we do our best by combining them, hoping that meta-analysis can turn a sow's ear into the proverbial silk purse.  Despite this we generate tons of confusing and contradictory guidance on most of the things that matter, such as nutrition, vitamin usage, cardiac risk and anti-lipid therapy, exercise, and on and on.  

For an interesting read on this subject, today I found an editorial in the Saudi Gazette written by Gary Taubes of the New York Times in which he talks about a “field of sort-of-science” in which “hypotheses are treated as facts”.   

The problem is not that we suffer from insufficient data or insufficient tools; indeed, we live in the era of Big Data analysis, where thousands of databases with billions (even trillions) of points are available for picking and mining.  No, it is not a problem of insufficient data; rather it is a problem of dirty data: poorly defined, incomplete, improperly gathered, and all too often inappropriately or insufficiently analyzed. 
Ultimately we are left with the same lingering question: “Who do you trust?”

To get on top of the problem we think in terms of Verification (can the analyzer repeatedly produce a reproducible result within a narrow range of allowable error?) and Validation (do test results agree with the gold standard, and can they distinguish subjects who are consistently “positive” from those who are consistently “negative” as measured by other parameters?).
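
As a rough illustration (not part of the original argument), here is a minimal Python sketch of how those two questions are often put into numbers in a laboratory setting: verification as imprecision (coefficient of variation) checked against an allowable limit, and validation as sensitivity and specificity against a gold standard.  All of the measurements, limits and names below are made up for the example.

    from statistics import mean, stdev

    # --- Verification: does the analyzer repeat itself within allowable error? ---
    # Hypothetical replicate measurements of one control sample.
    replicates = [5.1, 5.0, 5.2, 4.9, 5.1, 5.0]
    allowable_cv_percent = 5.0  # made-up imprecision limit

    cv_percent = 100 * stdev(replicates) / mean(replicates)
    print(f"CV = {cv_percent:.1f}% ->",
          "verified" if cv_percent <= allowable_cv_percent else "not verified")

    # --- Validation: do results agree with the gold standard? ---
    # Hypothetical (test_result, gold_standard) pairs; True means "positive".
    pairs = [(True, True), (True, True), (False, True),
             (False, False), (False, False), (True, False)]

    tp = sum(1 for t, g in pairs if t and g)          # true positives
    fn = sum(1 for t, g in pairs if not t and g)      # missed positives
    tn = sum(1 for t, g in pairs if not t and not g)  # true negatives
    fp = sum(1 for t, g in pairs if t and not g)      # false alarms

    sensitivity = tp / (tp + fn)  # how well the test finds the true "positives"
    specificity = tn / (tn + fp)  # how well it clears the true "negatives"
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

In real practice the limits and the reference method would come from the applicable standards rather than a toy script; the point is only that Verification and Validation can be answered with numbers, while the third V cannot.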
But I argue that, in addition, we rely even more heavily on Vindication, which in this context I take to mean, “My approach must be right because I came to the same conclusion as this other study.”  It is a pretty soft measure, and a throwback to the “evidence by authority” that predominated in the Dark Ages.  But it does have its compelling aspects.
 
I will give you an example.  In the Winter 2013 edition of Harvard Business Review, Edward Hallowell wrote an article about a study published by Killingsworth and Gilbert in Science (Nov 2010) on people's ability to focus on what they are doing and their sense of happiness.  What the study pointed out, based on a pool of about 2,200 adults and over 250,000 observations, was that people's minds wander about 46 percent of the time, even when they are enjoying what they are doing.  Hallowell, a psychiatrist and prolific writer on the subject of distraction, writes, “Not only does such a lack of focus lead to unhappiness, it results in errors, wasted time, miscommunication, and misunderstanding, diminished productivity, and who knows how much global loss of income…”   


Now I am not a psychiatrist (although I did a huge whack of training in psychiatry in my younger years), but in that sentence Hallowell summarizes the singular challenge for medical laboratory Quality, and indeed Quality of every kind.  Recognizing that distraction occurs regularly is totally consistent with James Reason's views on human error (slips and mistakes), with Dekker's views on how we respond to error, and with the non-linearity of cause and effect in Chaos theory.   Lots of people (maybe most? maybe all?) lose concentration during work, and when that happens the opportunities for error and confusion arise.   Telling people to try harder is not an answer.  Telling people to Do It Right the First Time is not an answer either.

Putting in systems that prevent the consequences of inattention and distraction (think Lean Poka-Yoke), systems that pick up errors as early as possible, and systems that reduce the repetition of error has a far more reasonable likelihood of success. 
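
By way of a hypothetical illustration (again, not from the original argument), a software poka-yoke can be as simple as an entry check that refuses implausible values instead of relying on an attentive operator.  The analyte name and limits below are invented for the example.

    # Hypothetical result-entry check: a small software poka-yoke that refuses
    # implausible values rather than trusting the operator to stay focused.
    PLAUSIBLE_RANGE = {"sodium_mmol_L": (100.0, 180.0)}  # made-up limits

    def accept_result(analyte: str, value: float) -> bool:
        """Accept a result only if it falls inside the plausible range."""
        low, high = PLAUSIBLE_RANGE[analyte]
        if not (low <= value <= high):
            print(f"REJECTED: {analyte}={value} is outside {low}-{high}; please re-check the entry")
            return False
        return True

    # A distracted operator types 14.0 instead of 140 -- the check catches it.
    accept_result("sodium_mmol_L", 14.0)   # rejected
    accept_result("sodium_mmol_L", 140.0)  # accepted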

I understand that my buying into this hypothesis is exactly what Taubes (sort-of-science) was concerned about (hypothesis becoming fact).  He would say that the linkage between distraction and error remains unproven.  But I have to say, without apology, that it works for me and I believe it.

Vindication.
