
Sunday, February 3, 2013

Quality and the Research Laboratory - one more time with vigour


Last week I had the opportunity to make a presentation to a group of laboratory sciences graduate students.  My topic was on Quality, a topic that I suspect was pretty marginal in their sphere of knowledge or interest. 

I had two agendas. The first was that they should be aware that Quality can be viewed both as a subjective characteristic, based on market-influenced notions of value, craftsmanship, and specialness, and at the same time as an objective measurable, based on specifications, requirements, and commitment.  The second was to show them not only that Quality in the objective sense has a role to play in every research laboratory, but further that the absence of Quality awareness renders everything they think and do null and void.

I started with the notion that there are tiers of Quality, starting from the base of Quality Control, then Quality Assessment, and finally Quality Management. 



To be fair, I acknowledge that not all laboratories will attain a level of achievement that includes a full Quality Management System.  Many clinical laboratories have at best a perfunctory Quality Manual and a pretty iffy Document Control or Process Control system, and unfortunately most clinical laboratory directors do little Management Review.  But I am also aware that these are by-and-large completely absent in research laboratories unless their funding agency demands that they demonstrate they follow Good Manufacturing Practices (GMP).

Many researchers scoff at the notion that they should participate in any form of Quality Assessment, thinking it exclusively means some form of proficiency testing or inter-laboratory comparison, and they forget about the simple basics of internal audit and competency assessment that let them know whether anything is being done the way they think it is supposed to be done. 

But the sad reality is that even the most basic Quality Control is by-and-large absent.  All too often basic assessment of equipment, reagents, and supplies through the use of control materials barely occurs, and when it is done, it is rarely used critically.  There is little use of control charts (sometimes referred to as Levey-Jennings charts) and a near complete absence of their interpretation. 
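The interpretation part is not mysterious.  A Levey-Jennings chart is just the control value plotted against limits derived from the mean and standard deviation of earlier runs; a minimal sketch of the idea, with invented control values and using the common 1-2s warning and 1-3s rejection rules, might look like this:

```python
from statistics import mean, stdev

def levey_jennings_flags(history, new_values):
    """Flag new QC results against limits derived from historical runs.

    A result beyond mean +/- 2 SD earns a warning (1-2s rule);
    beyond mean +/- 3 SD it is rejected (1-3s rule).
    """
    m, s = mean(history), stdev(history)
    flags = []
    for x in new_values:
        z = (x - m) / s
        if abs(z) > 3:
            flags.append((x, "reject (1-3s)"))
        elif abs(z) > 2:
            flags.append((x, "warning (1-2s)"))
        else:
            flags.append((x, "in control"))
    return flags

# Twenty hypothetical historical control runs establish mean and SD.
history = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0,
           100.4, 99.6, 100.2, 100.1, 99.9, 100.0, 100.3, 99.8,
           100.1, 100.2, 99.9, 100.0]
for value, flag in levey_jennings_flags(history, [100.1, 100.6, 99.2]):
    print(value, flag)
```

The point is that the chart only earns its keep when someone looks at the flags and acts on them; plotting without interpreting is exactly the absence I am describing.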

I was not alone in the room in being knowledgeable on the subject.  One of the seminar leaders lobbed me a really good softball question.  Is there not value in reproducibility as a reflection of accuracy?  If a value is tested multiple times and the same value is achieved, doesn't that give credibility to the value?   Tempting, but sadly, no.  What it did was open up the conversation on accuracy versus precision and bias.  It reminds me of an age-old description of surgeons: confident, quick, adept, and wrong. 
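The distinction is easy to show with numbers.  In this sketch (the reference value and replicates are invented for illustration), the replicates agree beautifully with each other, yet every one of them is wrong:

```python
from statistics import mean, stdev

# Hypothetical repeat measurements of a reference material whose
# assigned value is 5.0 mmol/L.
true_value = 5.0
replicates = [6.1, 6.0, 6.2, 6.1, 6.0, 6.1]  # tightly clustered, but high

bias = mean(replicates) - true_value                # systematic error
cv = 100 * stdev(replicates) / mean(replicates)    # imprecision, as %

print(f"bias = {bias:+.2f} mmol/L, CV = {cv:.1f}%")
```

A coefficient of variation near one percent looks like an excellent method, right up until you compare against the assigned value and find every result more than a full unit high.  Reproducibility measures precision; only comparison against a reference reveals bias.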

Research is a critical part of health progress, and to be fair there has been great progress over the last couple hundred years.  But investigation is slow and erratic, and expensive beyond expensive.  For every step forward, there are dozens of steps back.  That is the nature of unraveling new knowledge.  But we make the situation worse and not better when graduate students don't know or understand the roles of Quality Control and Competency.  I would argue that for every laboratory that uses a standard piece of analytic equipment, there should be availability of programs that provide materials to ensure that the equipment is being used properly.  You could call that a variation of proficiency testing or competency assessment.  I would also call it common sense.  I would further argue that an aware funding agency would require the regular use of such challenges, perhaps even tying funding continuity to performance.

Our universities and training centres have an obligation to teach and guide graduate students, to mold them into excellent investigators.  When we leave the very basics of Quality out of the experience and equation, we are just perpetuating our folly, not fulfilling our obligation to improve investigation. 

We have another session with the group next week.   

Monday, August 13, 2012

Proficiency Testing in the News



It is always exciting to see stories in the media on topics that are of interest, even if the stories are not exactly positive.  It means that the shining-light-of-public-scrutiny approach to Quality is at work.

This story was first reported by a news affiliate in Columbus Ohio and was then widely communicated by Robert Michel in the Dark Daily [http://www.darkdaily.com/ebriefings#axzz23SJDJout].  It brings light to a prominent academic centre in the United States that has run into difficulties because they apparently “inadvertently” referred proficiency testing samples to another laboratory.  

Before going on, let me say that Dark Daily is one of the most valuable medical laboratory Quality oriented sites on the web.  It is highly informative and a must read.

The story does not give a lot of details, so I will describe the following discussion based on my own experiences rather than on the specifics of what did or did not happen in Ohio.

First of all let me start with the following bold statement.  Laboratorians as a collective have always shown an ambivalent attitude towards Quality Control, and Quality Assessment.  We know that QC and QA are critical components of ensuring confidence in laboratory performance, but that is pretty much tempered by an overwhelming libertarian nature that resents intrusions into our professionalism.  We may do QC and QA but we don’t like it. 

Many laboratorians have a flexible approach to demonstrating laboratory Quality.  Given a choice, many laboratories take the approach that the best Proficiency Testing program is the cheapest one, with the least number of samples and the most simplistic of challenges.  Others (I suspect, and trust, not many) unfortunately go one step further, taking it as far as they can through gaming and cutting corners.   You might call it being deceptive or maybe even dishonest.

In the “olden days” we used to regularly have laboratories that would send samples to reference laboratories for testing and then report the results as their own.  Even today, we have laboratories that don’t fill in forms, don’t adhere to deadlines, and quibble over ambiguities that they uniquely envision, and then gripe and complain.  It’s pretty much a “get out of our hair” approach.  So with this as background, I can understand why officials at CMS would take a dim view towards finding laboratories that apparently are referring samples to other laboratories.  

In defense of the laboratory, I will point out that many academic centres receive huge numbers of samples every day, often reaching into the millions per year.  And life has become more complicated these days, with organizations relying on transporting samples into specific regional centres rather than doing testing locally.  So if a laboratory sends a sample or two to a place it was not supposed to, the error rate would be pretty low (a six sigma metric above 5.5).  
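To put a number on that, the sigma metric can be backed out of a defect rate with the inverse normal distribution; a sketch, using the conventional 1.5-sigma shift and invented figures (two misdirected samples out of a million handled):

```python
from statistics import NormalDist

def sigma_metric(defects, opportunities):
    """Short-term sigma level for a given defect rate, using the
    conventional 1.5-sigma long-term shift."""
    dpmo = 1_000_000 * defects / opportunities  # defects per million
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# e.g. two misdirected PT samples out of a million specimens handled
print(round(sigma_metric(2, 1_000_000), 2))
```

At that rate the process sits comfortably above 5.5 sigma, which is the sense in which a stray shipment or two, by itself, is not evidence of a broken process.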

But if they sent the sample to another laboratory, and then also reported the other laboratory’s results as their own, that is a big problem.

It is possible that there is another issue here; one where there is an absence of appropriate reporting choices.  The laboratory may be stuck with a form that they cannot fill in without creating another problem.

In our program we do have laboratories that receive our package of samples, which may contain challenges for certain tests that they would not normally perform.   With CMPT our challenge menu is set annually, and the package may include certain tests that are not normally performed by the laboratory.  For example, they may do clinical bacteriology, but ship enteric samples (feces) to another laboratory.  They may do bacterial identification, but not susceptibility testing.  In those situations we cannot customize the packages, but we do not expect them to perform tests that they would normally not perform. 
 
We have a number of solutions.  The laboratory can complete and submit their report with the designation “SNNP” meaning “sample not normally processed”, which once confirmed results in an automatic “ungraded” sample.  

If they would normally do only a preliminary investigation and then send to another laboratory for completion, we make it clear that they should provide us with the preliminary information they generate and report that they would send it on.  Clear instruction is given that they should not send the sample onward.
If they want to perform the challenge on an educational, self-interest, but not reported basis, they can also do that, with the result remaining as “ungraded”.  

Maybe these were options that the laboratory in Ohio did not have.

So at this point we don’t know if Dark Daily is reporting a problematic-shipping issue, or a caught-in-deception issue.  I hope for one and fear the other.

I will follow Dark Daily to see how the story unfolds.