
Friday, February 24, 2017

Quality Errors - When ISO gets it WRONG!

A ways back, maybe 1978 (?), when I was training to be a medical specialist, I was doing a research review of the records of patients with a particular infection.  This required going through medical records stretching back 10 years.
I remember this even today because, after reading one particularly horrific record, I realized it had been written by ME about 6 years previous.  My more experienced self was embarrassed by my younger self’s incompleteness and clinical illogic.  It was a painful lesson.

Today, I had a similar (sort of) experience.

I was going through, once again, the standard ISO 15189:2012 (Medical laboratories – Requirements for quality and competence) in preparation for an upcoming presentation.  I must have read this document maybe 100 times.  But maybe (by which I mean clearly) this time I was being more attentive.
I was looking for what we wrote about Quality Indicators, a personal topic of interest.  See: [ http://www.medicallaboratoryquality.com/2017/01/lamp-oil-and-quality-indicators.html ].

Here is what ISO 15189 says within the definitions:
“3.19: quality indicator – measure of the degree to which a set of inherent characteristics fulfils requirements

NOTE 1 Measure can be expressed, for example, as % yield (% within specified requirements), % defects (% outside specified requirements), defects per million occasions (DPMO) or on the Six Sigma scale.

NOTE 2 Quality indicators can measure how well an organization meets the needs and requirements of users and the quality of all operational processes. 

EXAMPLE If the requirement is to receive all urine samples in the laboratory uncontaminated, the number of contaminated urine samples received as a % of all urine samples received (the inherent characteristic of the process) is a measure of the quality of the process.”

So what’s wrong with that, you are probably saying.  Looks like reasonable stuff.  And it is, until you read the EXAMPLE.

I was one of a very few microbiologists on the committee that wrote the standard.  I hope that I was not the person who contributed the example, because it is worse than terrible.  My more experienced self today would strenuously object to its being incorporated into the standard because, by its presence, it gives license to laboratories to waste their time collecting useless information that sheds NO light on laboratories’ quality or competence.

I feel some sense of responsibility for allowing the inclusion of a wrong and misleading example.

This is an “in the weeds” issue, and one that may not be clear to folks who are not medical laboratorians, but the issue is NOT unique to medical laboratories.  There is a general and guiding principle: there is no quality-indicator value in collecting and monitoring information on activities that you do not and cannot control.

To address the aforementioned example without getting too icky and biological: urine that is in your bladder is supposed to be sterile.  Having bacteria in urine in the bladder generally means the person has a urinary tract infection.

The problem is that when urine leaves the bladder on its way to the toilet or the specimen collection bottle, it can pick up bacteria inside or around the anatomic channel that goes from the bladder to the outside.  Those bacteria are considered contaminants.  Without going into detail, bacteria that are found in bladder urine and bacteria found as contaminants are often the same, so it may be difficult to look at a culture test result and sort out whether the bacteria came from the inside or the outside.  There are washing and collecting techniques that can be used to reduce the amount of contamination that exiting urine may be exposed to.

But (and here is the point) laboratory workers do not collect urine samples; people go into a bathroom and “pee in private” and then leave the sample on a counter or in a refrigerator.

We can explain to people about washing and how to collect the sample, but we don’t join them at the toilet (too gross!!).  The only ways we can control contamination are by explaining to people or putting explanatory pictures in the bathroom, to which people may or may not pay attention.

So we can, as the example suggests, count the “number of contaminated urine samples received as a % of all urine samples”, but the number has no meaning and, worse, we can do nothing to increase or decrease it.  So while it may be an interesting number, it has nothing to do with quality or performance, and collecting the information is a waste of time.  There is nothing here that sheds any light on quality.  (No LAMP OIL here!!)

So here is a message to the current members of the ISO technical committee who will soon be reviewing and revising the standard:  KEEP the requirement for Quality Indicators, but KILL the example.

Wednesday, February 1, 2017

More Reflections on Philip Crosby’s Reflections on Quality.

The quality improvement process is progressive.  One doesn’t just go from awful to wonderful in a single bound.

Crosby’s book of reflections contains a spectrum of the trite, such as “Anything that tastes good is bound to be bad for you” (110), which is the all-too-common moan of the perpetual dieter, or the inane “Once you put on a suit, no one tells you the truth anymore” (83) – I’m sure there is a grumbling story behind this.

And then there are some that are very insightful, such as “The quality improvement process is progressive.  One doesn’t just go from awful to wonderful in a single bound.” (234).

I am intrigued by this reflection, not because it is so insightful (which it is) but because it comes from the mind of Philip Crosby.

I never got to meet Crosby, but I have read his books.  He seems in writing to be a pretty black-and-white sort of guy.  “Do it right the first time” or “Anything caused can be prevented” or “The Performance Standard is Zero Defects” and “Zero Defects means doing what we agreed to do when we agreed to do it.  It means clear requirements, training, a positive attitude, and a plan.”   

This is all pretty absolute and without any wiggle room.  And none of that sounds even remotely aspirational.  You either did it Right or you did it Wrong.  

And it is a terrible message.

There have been few messages that have had so much resonance as DIRFT, or have spawned more corollaries.  “Why is there so little time available to do it right the first time, and so much time allocated to repeating it?”  “You can’t do it right the first time unless you know what right is.”  “You can’t do it right the first time unless you know what it is.”  “If you don’t have the time to do it right, when will you have the time to do it over?”  And my personal favorite: “It takes less time to do a thing right than it takes to explain why you did it wrong.”

Almost all of these have the ring of bon mots: tone without substance.

In the medical laboratory, and I suspect in a lot of other jobs and positions, nobody wants error, nobody wants to make mistakes, and given a choice we “always” avoid error.  But most error is human-derived: some systemic, but a huge amount personal, and mostly silent or subtle; distractions, slips, misinterpretations, misunderstandings, errant keystrokes.  And while these still lead to errors and consequences, finger wagging about zero defects does not make things better; it makes things worse.

It’s like when your mom or teacher or coach berated you with “you have to try harder”.

The quality challenge is to aspire to no errors, but to be vigilant in looking out for them and diligent in catching them as soon as possible.  In some situations we can reduce error by inserting some effective poka-yoke (error-blocking) tools and techniques, such as required daily and in-run (real-time) quality controls, preventative maintenance, error-reducing software, and sufficient time-outs to restore focus and reduce stress.  More diligent attention to internal audits and proficiency testing can definitely help.

But not always.

So that is why I was so pleased to see that Crosby also had a progressive side, one that acknowledged that nothing in reality is as cut and dried as zero defects, and that an organization needs to have enough tolerance and patience to accept a planned process of little steps that allow time to reach a point of fewer mistakes and maybe even “zero defects”, at least for a little while.

On a related but different topic, work on our October Quality Conference is coming together nicely.  We have a working group that has come up with a number of very good topics, and some potential speakers.  One topic of special interest is a debate: Is Patient Centered Care in the Medical Laboratory Even Possible?
We have our location, and assurances from a number of sponsors.  I expect that we will be able to post our first advert on time.  In the meantime:

Save the Date
POLQM October Quality Conference.
October 1-3, 2017
Vancouver BC Canada

Friday, January 13, 2017

LAMP OIL and Quality Indicators

So I got an opportunity to participate in a small but important workshop the other day on developing a large regional laboratory quality strategy, and presented on the value and importance of developing Quality Indicators.  This was not a new topic for me, having first put on QI workshops and created a QI worksheet more than a decade ago.
To put QIs in perspective, Mark Brown, who published Keeping Score: Using the Right Metrics to Drive World Class Performance in 1996, wrote “Many organizations spend thousands of hours collecting and interpreting data.  However, many of these hours are nothing more than wasted time because they analyze the wrong measurements, leading to inaccurate decision making.”
At the same time Philip Crosby wrote in his Reflections, “Quality Measurement is effective only when it is done in a manner that produces information that people can understand and use.”

Both were true not only 20 years ago but, sadly, as we visit medical laboratories, are still true today.  Folks faithfully monitor “turn-around times” and contamination rates.  They make their graphs and pat themselves on the back for a job well done.  But the results are far from understandable and usable.  Their customers don’t know or indeed care, because they aren’t involved at any part of the process.

And in the meantime, medical laboratories, which have never been particularly open to public engagement, are quietly losing ground to expanding Point-of-Care suppliers.
Of the five ways that laboratories assess performance (Accreditation, Proficiency Testing, Internal Audits, Quality Indicators, and Customer Service), Quality Indicators can be the most focused, elegant, track-able, and telling (and available for public awareness and engagement), so it makes sense to focus energy on getting them right.

When I put my workshop materials together way back when, I proposed there were (and still are) seven critical criteria that Quality Indicators must meet in order to have any chance of being successful.  Leave one (any one) out, and you can pretty much guarantee failure.

OBJECTIVE:  Know what you want to measure and why.  Be precise and specific.

METHOD:  Indicators are by their nature things or events that can be measured (counted, timed, or weighed).  And more specifically, your QIs need to be measurable by you.  If you don’t know how you are going to capture the information, then don’t start.

LIMITS:  Before you start collecting, know what your level of acceptance is and what is a critical level of error.  Get input from your customers.  And take Risk into account: telling even one person they have HIV/AIDS when they do not is a BIG problem.  I understand that others may see this differently, but comparing your results against another organization (benchmarking) rarely works: they aren’t you and you aren’t them, and too many variables get in the way.

INTERPRETATION:  When you gather your information, does it tell you, and others, something about your Quality?  If it does not, then it is hard to call it a Quality Indicator.

LIMITATIONS:  No measure is absolute and perfect.  That’s why we have Measurement Uncertainty.  (MU is also not absolute or perfect.)  If you don’t appreciate that variables can impact your indicator, you may go down the wrong rabbit hole.

PRESENTATION:  If you can’t express your results in an easy-to-comprehend manner, then it is going to be tough to have impact and engage the people you need to engage.  Maybe it is a graph, maybe a picture, maybe a sentence – but definitely not a report.

ACTION PLAN:  If everything is pointing in the right direction, do you have a plan for what happens next?  More importantly, when everything is pointing south is the wrong time to be thinking about what to do.  Have your plan in place before you start.
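To make the LIMITS and ACTION PLAN criteria concrete, here is a toy Python sketch; the thresholds, the contamination-rate indicator, and the names are all invented for illustration:

```python
# Acceptance and critical limits are set BEFORE collecting begins,
# per the LIMITS criterion above (invented values for illustration).
ACCEPTABLE = 2.0   # % defective samples we tolerate
CRITICAL = 5.0     # % at which the pre-made action plan is triggered

def assess_indicator(defects, total):
    """Compare one Quality Indicator result against pre-set limits."""
    rate = 100.0 * defects / total
    if rate >= CRITICAL:
        status = "CRITICAL: trigger action plan"
    elif rate > ACCEPTABLE:
        status = "WARNING: investigate"
    else:
        status = "ACCEPTABLE"
    return rate, status

print(assess_indicator(12, 1000))  # → (1.2, 'ACCEPTABLE')
```

The point of the sketch is only that the decision rule exists before the first data point arrives; the numbers themselves would come from your customers and your risk assessment, not from a benchmark.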

If you take the seven initials, OMLILPA, and fiddle, you end up with LAMP-OIL, a rather perfect anagram.

Done well, Quality Indicators can shed a LAMP OIL Bright Light on Quality Performance.