Saturday, May 18, 2013
ASM Quality Workshop
Today we participated in the Quality Workshop at the Annual General Meeting of the American Society for Microbiology (ASM). In some respects this is the perfect union. The ASM attracts about a bazillion people from around the world. It is big enough to fill almost every hotel in Denver. The Quality Workshop is a bit of a niche interest, but we managed to have people from across the US, Canada, Lebanon, South Africa, Australia and the Republic of Cameroon. Small conferences will attract interested folks, and mega-meetings will attract broad international crowds. We were lucky to end up with the best of both.
Without wanting to overstate the situation, I think the light has gone on in the United States. While at one time the CLIA regulations were the vanguard leaders in the world, many would say that at 25 years old, CLIA has become a tired millstone in serious need of revision. While there are many US laboratories that don’t give a damn about quality, those that do recognize that being accredited to CLIA provides no evidence of interest in making their laboratory better.
Fast, cheap, easy, mandatory and sufficient; good-enough is good-enough.
The folks that attended the workshop were of a different mindset; interested in learning about international quality, with higher aspirations for their laboratory’s reputation and recognition.
The meeting was put together by CDC and included two CDC speakers, a well-informed speaker from another laboratory and me. One can find the names of the other speakers by going to the ASM Annual General Meeting site and looking for speakers at workshop WS:04.
It was an excellent meeting full of practical tips, and perspective.
I gave a good overview of why Quality Management belongs in every laboratory.
A very good presentation was given on CLSI and that organization’s participation in the Quality area. A very good crosswalk was presented on how CLSI guidelines lined up with the requirements of ISO standards. It was pretty clear that CLSI is contributing to US medical laboratories by providing a series of documents that help implement Quality.
What was really interesting to me was the presentation on measurement uncertainty (MU) from a laboratory in the US that had decided to get accredited to ISO/IEC 17025 very early on, and has had to struggle with MU. A model for application of MU for quantitation of urine cultures was presented.
I thought it truly excellent. We all agreed that MU is an irrelevant waste of time, but it has to be dealt with if a laboratory wants to benefit from the achievement of ISO accreditation. Having to jump through hoops purely to satisfy accreditation bodies may be stupid and evil, but it is unfortunately an unavoidable stupid evil. What Alice presented was a model that would get it done in as painless a manner as possible. And that probably is the best way forward.
And more to the point, the information is generated solely for internal holding. Do not present the information to anyone other than the accrediting body.
What I thought amusing was her mentioning that the accrediting body required her to actually go through the arithmetic during their site visit. Just sad.
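The model itself isn't reproduced here, but as a rough illustration of the kind of low-pain arithmetic involved (my own sketch, not necessarily the presenter's model), MU for colony counts is commonly estimated from log10-transformed replicate counts, since counts are roughly log-normal, with the expanded uncertainty reported as U = k × s:

```python
import math

def measurement_uncertainty(replicate_counts, coverage_k=2):
    """Estimate expanded measurement uncertainty for colony counts.

    Counts are log10-transformed, the sample standard deviation of
    the logs is taken as the standard uncertainty, and the expanded
    uncertainty is U = k * s (k = 2 gives roughly 95% coverage).
    """
    logs = [math.log10(c) for c in replicate_counts]
    n = len(logs)
    mean = sum(logs) / n
    if n < 2:
        return 0.0
    s = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return coverage_k * s  # expanded uncertainty, in log10 CFU/mL

# Hypothetical replicate counts (CFU/mL) from repeated plating of one specimen
counts = [92_000, 110_000, 88_000, 105_000]
u = measurement_uncertainty(counts)
print(f"U = {u:.3f} log10 CFU/mL")  # → U = 0.092 log10 CFU/mL
```

The point of a model like this is exactly what was argued at the workshop: a documented, repeatable calculation that satisfies the assessor with minimal pain, generated once per method and held internally.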
The last presentation was unfortunately cut short because so many questions were being asked. It was too bad because it was shaping up as an excellent presentation of some practical tips on implementing Quality on a voluntary level. As mentioned, if your laboratory is not going to go for full ISO accreditation, you are still better off implementing some easy steps along the way, and gradually building your Quality inventory. It is not the best way to implement Quality, but if management and staff are committed, it can work. I am thinking of inviting her to give the presentation again in Vancouver at the POLQM Medical Laboratory Quality Conference Meeting, October 16-18, 2013.
If you want to meet an impressive faculty, plan to come to the meeting. If you want to learn and discuss and debate Quality with peers, plan to come to the meeting. If you want to participate in a World Standards Day celebration, plan to come to the meeting.
More info VERY SOON.
Sunday, May 5, 2013
Over my career I have had a bipolar relationship with medical laboratory accreditation bodies. On the one side, I see accreditation, when it performs its function as an external assessment body, as an important and valued partner for laboratory Quality. Being an outside set of eyes with an understanding of both Quality and laboratory practices, and armed with an established standard such as ISO 15189:2012 or ISO/IEC 17025:2005 (or one of the related documents adapted from the parent document and used on a local or regional basis), accreditation bodies can be powerful supporters of laboratory Quality performance.
On the other side, I have been particularly critical when accreditation bodies believe it is within their purview to write or cherry-pick their own standards, or when they allow for loose understanding and even looser interpretation by peer reviewers. Any assessor who starts an accreditation observation with the phrase “Well in my laboratory we do it this way…” is a bad assessor and should be terminated immediately. I don’t care what someone does in their laboratory; I need to know if my laboratory is meeting the word and spirit of the established standard.
And it irks me when folks who have never worked in a laboratory, or have no recent understanding of laboratory activity, believe they are appropriate persons to take authority for creating standards.
And I really fatigue of Accreditation Bodies that report, year over year, a hundred percent success rate on laboratory accreditation. While I don’t think it is nice that some laboratories are closed and people lose their jobs or hospitals or communities are inconvenienced, I find it far worse when weak laboratories are given admonitions, knowing full well that nothing is going to happen. If a laboratory has a poor performance, it is far better that the inconvenient thing happen and the problems get addressed, rather than waiting for the horrific failure to occur.
And what especially annoys me is the verbiage that says that Accreditation can assess the competence of a laboratory. What total nonsense. Accreditation visits are staged events, barely longer than a Broadway play. The notion that a group of people can come into a laboratory on a given day and make anything other than superficial commentary on the activities of the people working on that day is a good example of a hallucinatory pipedream.
Recently I was in discussion with a person with responsibility and authority, and commented that in the olden days the technical requirements of a laboratory were both manual and working-knowledge based. Laboratory workers needed a lot of hands-on skills and accurate recall of intricate data. Today’s laboratory is different; autoanalysers have taken over most of the manual and mental skills, leaving staff requiring a new skill set focused mainly on computer and data handling. In the olden days, laboratory workers needed to be able to manually maintain hardware, sometimes ripping it apart and then repairing it. No longer. Equipment these days needs service experts or, more often, is just swapped out.
Skill sets come in two categories, routine and crisis intervention. Both work well the vast majority of the time, with the rare and irregular slip and/or distraction that leads to error. The reality is slips and distractions do not lend themselves to measurement for competence on a given day by a given group, no matter how clever they are. Most site visits will never occur on the day that an error happens.
What accreditation bodies look for these days is evidence that Quality Control and the Quality System are intact and being followed faithfully. The accreditation body is determining if the organization is meeting its laboratory-relevant Quality system in a manner that is likely to detect error early, remediate it and correct it before ensuing harm spreads beyond the confines of the laboratory. Not to belabour the point, but we already have groups that look at whether or not an organization’s Quality system is intact: we call them Certification Bodies.
So while the accreditation bodies often view certification bodies as their inferior and evil twins, the reality is that by and large the two groups provide the same function. And if that is the case, do we really need both?
From my personal perspective, I find that the Certification Body that visits CMPT on an annual basis provides a demanding review of our Quality System and Quality Control. They don’t have the time or experience or knowledge to sit and watch as we go through the tasks related to planning or creating or transporting our PT samples, nor do they sit in as we go through the evaluation process. They find and examine the secondary evidence that supports that we are doing our tasks well. Our adoption of, and regular assessment to, ISO 9001:2008 has made us a very effective PT provider.
At a certain point medical laboratory authorities may want to ask if assessment of Quality and Competence can only be done one way.
Monday, April 29, 2013
The Big Value of Smaller Conferences
In the Quality arena, especially in health care related organizations, it is critical that we distinguish between what we want and what we need. There are all sorts of things that we would like to have: a better and more extensive Quality library stocked with both on-line and paper books, new Quality platform software, a new statistical software package, a 5 member Quality Team, opportunities to visit other organizations to learn from their Quality system, and all the time necessary to visit every Quality oriented blog and web-site on a daily or weekly basis. And if you can do all those things, congratulations, you are extremely well-funded, far beyond most of us.
But there are some opportunities that do not fall into the “nice-to-have” category, and much more into the “need-to-have”. You need to have the time and wherewithal to be doing some internal audit process. You need to have the knowledge and skills to have some form of regular Quality monitoring such as Quality Control and/or Quality Indicators. You need to have some form of Continuous Improvement program. And perhaps most important, you need to have a mechanism that will enable effective Continuing Education, for as many of your people as possible.
And that brings me to the topic of conferences. There are many organizations that are giving up on sending people to conferences. Travel is too expensive, the amount of social time as compared to learning time is too great, and the amount of tangible take home information can be too little. Sending your people to an education oriented conference can be heavy on the cost and weak on the benefit, unless some work is put into selecting the right meeting with the best chance of return value.
From my perspective, big international conferences that attract people in the thousands are great for networking and maybe for some highly selective trade-show information, but generally are very low on the education and information side. The crowds tend to be too large, the number of concurrent sessions make getting to the sessions you want very difficult, and getting direct contact with the informed faculty is almost impossible. All too often the faculty-to-participant ratio is around 1:100, giving little opportunity for meaningful conversation. What may be useful for the individual person interested in networking is less satisfying for the person there to pick up tips and ideas.
At the same time, the small local meetings can have limited value in the other direction. Local meetings usually mean that people are expected to attend and work at the same time. The small local workshops tend to be short half-day events, usually on a single topic only, and usually with a limited faculty. Good if the topic under discussion is the topic that you are interested in, but otherwise you just have to wait your turn. Faculty-to-participant ratios tend to be closer to 1:50, better than the former, but still not very effective.
So does that mean that meetings are a complete waste of time? Not so. I think what it argues for is the intermediate regional meeting. The meeting of 150-250 people is actually an ideal size. It is large enough to attract a faculty of knowledgeable speakers on a subject theme, but at the same time it is small enough that folks get to chat and brainstorm in reasonably sized groups. It is large enough that you can meet and network, but small enough that you can actually get to know someone beyond just sharing a business card or LinkedIn address.
In October 2013 we will be hosting our POLQM Laboratory Quality Conference. We anticipate about 200 laboratory technologists, residents, students and pathologists, all interested in laboratory Quality. Many will be from Western Canada, but there will be folks from across Canada and the Pacific Northwest. We already know of some people coming from outside North America. It will be a good group to network with. The faculty, all knowledgeable experts, will be there in an ideal ratio of about 1 faculty to 8-12 participants; a perfect opportunity to pick some brains. The location (Renaissance Hotel in Vancouver) will provide a comfortable environment conducive to good conversation. And there are sufficient sponsors for those that want some tradeshow experience and the opportunity to garner a variety of laboratory supplier information.
If you want a cast of thousands, this is probably not the meeting for you. But if you want or need an opportunity to network, discuss, and learn, this is going to be a VERY good meeting.
It’s like Mick said: “You can't always get what you want, but if you try sometimes well you might find you get what you need”.
For more information visit www.POLQM.ca
Thursday, April 25, 2013
PT Bonus Opportunities: would your laboratory benefit?
Frequently we talk about the benefits of Proficiency Testing as a method for the detection of systemic error in laboratory testing, especially as part of the examination phase. Once you accept that Proficiency Testing challenges have been thoroughly quality controlled and are highly reliable, then it becomes, as my old calculus professor used to say, intuitively obvious that the most probable cause of deviation between a clinical laboratory’s result and the PT program’s result is some form of problem within the clinical laboratory. It likely is a slip or distraction by someone in the laboratory’s testing chain, but it may reflect a larger systemic error that otherwise is not being recognized or is under-appreciated. Discarding deviations in PT performance can be lost opportunities for improvement.
But recently we had two interesting results come to light that reinforce that systemic error in the testing pathway detectable with proficiency testing materials can come from all sorts of places.
As per our normal routine, our PT coordinator was checking the laboratories that had not yet sent in their results 48 hours before the due date, and found one such laboratory and contacted them. [As an aside we can provide that extra level of service because we are a small program. Large programs with thousands of participants could never provide that extra assistance.]
The laboratory checked their records, came back on the phone, and said that the problem was on our end, because the laboratory had never received our samples in the first place. We were the problem. So a check was made through the courier service, and what was found was that the samples had been delivered on time as committed, and that the delivery way-bill had been signed off within the laboratory. A call-back was made and, sure enough, the box was sitting in the refrigerator where it had been placed, unopened.
The story has two messages. First, if this happened with our package, this could have been a one-off by someone who simply forgot (call that a human slip) or perhaps this happens more commonly than the laboratory is aware (call that a system error). Second, our system informs the laboratory on the day the package goes out. If there is going to be a problem it would be captured within 48 (max 72) hours. If someone had called us and checked, the box would have been found immediately. That this didn’t happen either means that a distraction resulted in the call not being made, or that the laboratory has an inventory-control problem which needs checking.
Either way, the point is that even without being tested, this PT shipment has resulted in detection of two problems (is it OK to call them errors?) that the laboratory now has the opportunity to check out. Either they were a chain of simple human foibles, or they were a manifestation of failures in the delivery handling and monitoring procedures.
A second story is similar, but starts not from us contacting the laboratory, but the laboratory contacting us with an apology for a 5 day delay in submitting results. Apparently, there usually was a Quality technologist whose job it was to submit PT challenge reports, but that person had recently retired and no one had yet been appointed, and so the job was “slipping through the cracks”.
We understand that people do retire; that is called “business as usual”. But if the PT reports are not being submitted, are there other tasks that are not getting done? If quality control testing is being delayed, and reagent defects are not being identified, then how many hours are going to be lost having to remediate erroneous test reports? Or worse, what if quality control testing isn’t being done at all?
My point is that PT samples are more than just a material to challenge the examination phase of laboratory testing. They are known and traceable and regularly received materials that can be used to monitor every aspect of the laboratory cycle. Usually everything works as it should, and sometimes it does not. It is when things do not go well that these “safe” opportunities arise for checking for system error.
Opportunity accepted or opportunity ignored?
Sunday, April 21, 2013
Clinical Microbiology Proficiency Testing (CMPT), the PT program that I chair, has just successfully undergone our 10th annual assessment visit by our certification body (SAI Global). We are the only Proficiency Testing program in Canada or the United States serving medical laboratories with a continuous 10 year track record of successful external evaluation.
Congratulations to us.
Voluntary oversight is an interesting activity in which to be engaged. In Canada, at least for the time being, proficiency testing bodies providing services and challenges for medical laboratories are not required to be externally audited by any agency in any jurisdiction. We can be accredited to ISO 17043:2010 if we want, or we can be certified to ISO 9001:2008, or we can do nothing. It is our choice. Even if some authorities know what we choose, nobody officially seems to care. To the best of my knowledge, we have one organization that is voluntarily accredited to ISO 17043, and there is one organization certified to ISO 9001 (that is us). The remaining programs, large and small, have to date decided that oversight provides them no particular benefit.
I suspect that the laboratories to which we provide service see the world a little differently. When we surveyed our laboratories, most reported that they see our certification as providing evidence of our Quality and a basis for trusting our Quality and Competence. That being said, there is no evidence that laboratories make any decisions about which PT program they will participate in based on Quality oversight. Further, there is little doubt that if the cost of our program was perceived as too high, or if some of our “competitors” decided to become more aggressive, many of our laboratories would drop our program. Loyalty goes only so far.
More to the point, I suspect it would be exceedingly difficult to develop any evidence that would suggest or support that laboratories that participate with over-sighted PT programs make fewer errors or provide a higher level of patient safety.
I will take this one step further. I am aware of only one medical laboratory accreditation body in Canada that has sought external assessment, and I can see no evidence that would suggest that laboratories accredited to that program are safer or better than those accredited by non-oversighted accreditation bodies. [That being said, we do have some anecdotal evidence that laboratories with NO accreditation may be inferior].
So there is a reality that if we think we are following our Quality journey for some tangible benefit to patient safety or medical laboratory improvement, we probably would be wrong. The benefits of Quality lie almost exclusively with us, with some intangibles that go outside the house.
First off, by working through a structured quality process we catch our mistakes earlier, and prevent most from repeating. By learning from our mistakes and keeping our errors in-house, we save substantial amounts of time, energy and money. I estimate this saves our bottom line probably 5-7 percent.
Second, we are a real-life example of the phrase “Quality improves Culture and Culture improves Quality”. CMPT has a very powerful Culture of Quality. After 10 years, our Quality system is at the core of everything we do. It is the basis of our innovation efforts, our internal communication, our discipline of continual improvement, and our dedication to providing the best service that we can for the laboratories that work with us.
When we plot our culture map we are very high on Market (customer awareness), Adhocracy (innovation) and Clan (intergroup dynamics), and lower on Hierarchy (internal leadership and requirements). We are exactly where we want to be.
There are some lesser tangibles that result from our Quality strategy. We get a lot of recognition from other countries and are perceived by many as a leader in method development and innovation for proficiency testing. Considering that we are really a small group, that recognition is a real plus and driver for us.
And perhaps very importantly, had CMPT not embraced Quality, then we would never have reached the point of opening our sister program, the Program Office for Laboratory Quality Management, and would not have experienced all the huge pluses that have accrued from that.
With all that said, we are absolutely certain that going down the Quality path when we did has given us a level of success that is off-the-scales.
A truly brilliant decision of which I have absolutely no doubt.
Wednesday, April 17, 2013
Twelve Core Quality Messages in Plain Language
Over the last while in this blog I have raised the challenge to medical laboratorians that it is time to start changing our report writing style.
In the past our reports have been written by us, and by-and-large not with the customer in mind, but rather to suit ourselves. As much as we like to lay the fault with the Laboratory Information System (LIS), we were the ones that picked the words and style. We are all to blame, but I would argue that the two most guilty have been the tissue pathologists and especially microbiologists.
In microbiology our reports are so full of jargon and subtlety that I am surprised that anyone can figure out what we are saying. We insert microbial name changes at the drop of a hat; we use terms like “GAS” and “GBS” and “no anaerobes detected” and “no significant growth” and “normal flora” as if everyone understands what they mean. Newsflash: few physicians and virtually no everyday patients have a clue what any of those terms mean, and creating reports with those terms is not helpful.
But if that is a problem, pity the poor person trying to get a grasp of Quality without taking a course. Our area of interest is so loaded with jargon and gibberish that I suspect most of us have at best only partial understanding. Agile, scrum masters, waterfalls, kanban, poka-yoke, green belts, black belts, DMAIC, scorecards and OFIs; it goes on and on. If we want to develop a consistent concept of Quality, then perhaps we need to start developing a more direct, interpretable language that can be understood as plain language. [For related reading see: http://www.medicallaboratoryquality.com/2011/09/quality-and-tower-of-babel.html ]
Consistent with that message, recently I was to prepare a lecture for some students that have an interest in international public health, but have essentially no laboratory contact and even less contact with the concepts of Quality Management. A tough exercise at the best of times, but to provide the information and immerse them in Quality terminology would be a mind-numbing, eye-glazing, nap-promoting experience. I have endured those lectures before. Stone cold killers.
So this gave me an opportunity to see if I could create a lecture in a way that would allow them to at least grasp the concepts of Quality and remain if not alert, at least conscious. I figure if I can get the concepts across in plain understandable terms I can make Quality relevant. At a later time, for those who might develop an interest to discover more, we can start to introduce the specific terms.
To that end I created the following, which I think covers the main messages that we think about when we decide to implement a Quality system. Each message is connected to a specific Quality Process.
I call the following:
Twelve Core Quality Messages
• Quality is everybody’s business, but Management carries 80 percent of the load
• Employees need to know what you believe
• Employees deserve to know what their job is
• If performance is not right, it is wrong
• Everyone should read off the same page
• Think before you Do; Check and Fix what you Did
• The earlier you detect mistakes the better
• The error you prevent is no longer an error
• Learn from your mistakes
(OFI and CONTINUAL IMPROVEMENT)
• A second look is very helpful
• An outside look is better
(EXTERNAL QUALITY ASSESSMENT)
• If Management doesn’t check, then it doesn’t know
Maybe I am deluding myself, but I think this is something that I can use, and might work.