Sunday, May 29, 2011

A Book Report: QA and the Pathology Laboratory

Professional reading literature falls into an interesting range these days. At one end of the spectrum are the ethereal, electronic opinions in the form of web-journals. Of more substance, but just as fleeting, are the rapid-access electronic peer-reviewed journals: articles of substance with an extremely short shelf-life. Next along is our traditional journal manuscript in printed form, usually an article of substance, accepting that its value is generally transient. A journal article's reference impact is short, especially when compared to the longevity of the paper upon which it is printed. And at the other end is the textbook: a reference that is usually out of date before it is published, but which usually contains principles of a more general nature that provide a structure for learning.
Somewhere within that spectrum lie the books published by CRC Press. They look like textbooks and generally have chapters with a reference structure, but they tend to be very current, with an intermediate shelf-life. Quality Assurance in the Pathology Laboratory: Forensic, Technical, and Ethical Aspects, edited by MJ Bogusz and published by CRC Press (2011), falls into this category.
This is an intermediate-sized book (374 pages) in 10 chapters that gives fairly broad coverage to the subject. There are chapters for the pathologist, the chemist, the qualitologist, the educator, the standards development folks, and the assessment bodies. There is material on the pre-examination, examination, interpretation, and post-examination phases. There is a lot on laboratory process and some on point-of-care testing.
You might think that a fairly short book with so broad a mandate would have more misses than hits. But you would be wrong. The success is in the selection of authors, a mix of Americans and Europeans, all expert in the field.
From my personal perspective, I found the chapter on Education and Training in the Changing Environment of Pathology and Laboratory Medicine by Gian Cesare Guidi and Giuseppe Lippi to be the most enlightening. I thought it was going to be about programs for training pathologists in quality assurance, knowing that would be a very short chapter because it would consist of one sentence: “There is insufficient evidence of any substantial training in quality and its techniques for pathology residents.” The end.
Instead, the authors did a very credible survey of international training curricula at the undergraduate level, the professional school level, and the postgraduate level, for both medical and technologist graduates, and make the argument that there is room for new and complex changes in the core curriculum for laboratory professionals. It is an excellent read for anyone engaged in training and education.
The weakest chapter, but still a useful one, was Quality Assurance of Quantification using Chromatographic Methods with Linear Relation between Dose and Detector Response by Georg Schmitt and Rolf Aderjan. Actually (and fortunately) the chapter had virtually no relationship to its title; rather, it provided a survey of quantification tools such as calibration, precision, bias, and of course measurement uncertainty (MU).
Whether the authors intended it or not, they wrote one of the most important and telling sentences in the whole book: “Measurement uncertainty is an important part of the reported result. According to ISO 17025, the MU must be reported.” I can’t think of anything more telling about MU. Its only purpose in the medical laboratory is to satisfy the statistical whims of some accreditors. (I feel a rant coming on; more on this later.)

Let me be clear. I don’t know any of the authors personally. I don’t know the editor, and I have no financial links, direct or indirect, to CRC Press. (I co-edited a book on STDs with them in 1997 and co-wrote a chapter on Yersiniosis in 1988.) But I would recommend this book to all laboratorians. It provides an excellent survey of quality assurance, offers good insights, and will be a useful reference for at least the next 5 years.

PS:
For those of you still thinking about our Quality Weekend Workshop, time is getting short.  June 17 is just around the corner.  Register NOW!!
www.POLQMWeekendWorkshop.ca

Wednesday, May 25, 2011

Education Opportunities in Quality (2 new and valuable finds)

I am putting together my thoughts about what’s new in education opportunities in laboratory quality, and I came across a new book, “Quality Assurance in the Pathology Laboratory: Forensic, Technical, and Ethical Aspects”, edited by MJ Bogusz and published a few months ago by CRC Press.  Let me say at this time that (a) it is an excellent book, (b) I am going to offer an unsolicited critique on Saturday, and (c) I suspect that it is a book we will soon find on most of our bookshelves.  But more on this later.

Thinking only about Canada, laboratory education has always been an orphan subject.  Every once in a while over the last 40 years there would be some discussion on the subject, but it was rarely done in anything other than an unstructured fashion.  Until 2003.

In Canada, the publication of ISO 15189 in 2003 generated a lot of interest, perhaps more as a curiosity, but almost right from the beginning organizations recognized a new niche for educational opportunities.  Organizations such as the Michener Institute in Toronto started to provide educational courses, primarily for folks living in or around Toronto.  We saw an opportunity for a broader audience by providing our on-line UBC Certificate Course in Laboratory Quality Management, which has become both popular and well received.  Both of these courses were designed primarily to give working laboratorians the basic but key knowledge needed to take on the newly created position of Quality Manager.
That niche has since been supplemented by a variety of shorter-term on-line webinars from CLSI, the American Society for Quality, and the Ontario Laboratory Accreditation (OLA) program.
To date we are not seeing that niche becoming saturated.  There are many folks still seeking that information, now not only within Canada but abroad.  What began as a program focused on technologists has become popular with administrators, laboratory pathologists, and laboratory pathology residents.

But the next dimension has already begun.  We have started to receive requests for opportunities for more extensive study and for engagement in research.  Folks around the world are beginning to recognize that Quality in the laboratory is not just a flavor of the day, but is indeed the new discipline we have been talking about, and a focus for attention and study and career.  This is very exciting.  Today we are finding a smattering of Masters-level programs directly addressing laboratory quality in the US, and soon ours in Canada, along with some others that may provide information on Quality as part of an MBA, a Masters in Health Administration, a Masters in Public Health, or a Masters in Leadership.

This week I was introduced to what I think is something that Europeans do so well: a Masters program in Laboratory Quality provided by a consortium of SIX universities from Portugal, Spain (2), Norway, Poland, and China (!).  Interestingly, it functions as an on-site (as opposed to on-line) program with requirements for study, research, and a thesis.
It seems to cover a broad spectrum of knowledge; indeed on paper it looks almost identical to the subject matter that we are proposing for our program.  From discussion I understand that it has been running for 2-3 years with about 30 students a year, which would put very intensive demands on teaching faculty and research facilities.
As I mentioned, I had not heard anything about this course until a few days ago.  I suspect that I am not alone.

For folks thinking that this might be an interesting focus of study today, I recommend you check out http://cursos.ualg.pt/emqal/

For those that are still thinking about it, but not ready to commit just yet, I am looking forward to our program starting in 2012.

More on Quality Assurance in the Pathology Laboratory in a few days.

Saturday, May 21, 2011

Competencies

This entry is linked to specimen rejection, but not exactly in a linear fashion.

By way of circumstance I was invited to attend a meeting in eastern Europe with 5 other “EQA Experts” to assist and advise an initiative to develop a new national EQA program, in a country looking to improve its quality monitoring and improvement of medical microbiology laboratories.  The meeting went as anticipated: some presentations (all well received) and a lot of discussion with many points of agreement, along with some ongoing barriers, mostly of a “small-p” political nature.  At this point there was a general sense that none of the barriers were insurmountable.

But that is not what I want to talk about.

What was interesting to me was to see how different very established EQA programs can be at some very basic, fundamental, philosophical levels.  All of our programs were of a similar age (25-35 years) and all in developed countries.  While none of the programs was wealthy (NOBODY gets rich on EQA!), all were financially stable and had well-established infrastructure.  All the programs were headed by senior, experienced staff with similar backgrounds: university affiliations and clinical laboratory experience and expertise.  Some had more patient experience than others.

So you would not be surprised that there was a lot in common among the programs.  Everyone recognized that EQA and Accreditation often work hand-in-hand as Quality Partners, each with their independent mandate.  Everyone saw that, given a choice, EQA was best focused on education and improvement rather than on laboratory censure or closure.  We all saw EQA much more as a CARROT than as a STICK.

But there was a core difference, with essentially half the group on one side and half on the other.  One side (I will call them the “technical” or "archaic" group) perceived EQA as a tool only to address measurement of technical competency, while the others (I will call us the “progressive” or “clinical competency” group) saw EQA as a broader measure with a mandate to monitor, to the extent possible, the delivery of clinically relevant, timely, and accurate information. [We recognize that timeliness is probably the hardest attribute to measure, but as on-line systems improve, timeliness is likely to become a part of the assessment process sooner rather than later.]

While we all develop tools to measure examination-phase performance (microbial identification, susceptibility testing, and positive/negative serological or toxin testing), the technical group stopped at this point.  There might be an inclusion of a brief clinical history statement, but it was there mostly to provide some context for interest.

On the other hand, the clinical competency group integrated the clinical context much more deeply and introduced pre-examination and post-examination variables into the laboratory monitoring process.  Factors such as patient age and clinical circumstance are integrated into the history as factors to be taken into consideration in the investigation of the sample.  Pre-examination variables may be added in.  Post-examination issues such as interpretation, report construction, inclusion of informative or cautionary notes, and clinical vocabulary are included in the report process.
In this approach the laboratory is expected to distinguish known or probable pathogens from contaminants or normal resident flora.  Susceptibility testing needs to take into consideration patient age and wound location.  Do the laboratories report information relevant to public health or infection control?

The point is that over the last 25 years, microbiology laboratories have progressively become more like biochemistry and haematology laboratories.  The technical aspects, which used to depend on technologist knowledge and skill, have been overtaken by machines.  EQA technical testing is no longer a challenge of laboratorian skill; it is more an extension or variation of quality control and the use of control materials.  The human skill now lies far more in the areas of clinical relevancy, sample acceptability (I mentioned before the link to rejection criteria), and the appropriateness of the information that the laboratory provides to the clinician.
And that is the area upon which we need to focus our education and improvement energies.

If EQA wants to remain relevant to the laboratory and continue to play a role in improving patient care, then it is time for all programs to move on.  

Adapt or die.



Sunday, May 15, 2011

More Rejection.

Sometimes rejection is a good thing, at least when thinking about laboratories protecting the quality of their work and output.  Rejection criteria are created to protect the laboratory, the clinician and the patient.    The laboratory protects itself from doing needless work and work with little chance of clinical utility.  The clinician is protected from receiving information that might be non-contributory, or worse, might lead to wrong decision making.  The patient is protected from spending money for additional tests and therapies that would not have been ordered had things been done right the first time.

In a litigious society, working on poor quality samples has a higher probability of costing the laboratory more through legal fees than it recovers in test revenues.


In a non-litigious society, reporting results that are either clinically irrelevant or clinically inappropriate damages the laboratory’s credibility.  Frequently we hear that the reason so few samples are sent to medical laboratories in developing countries is that the doctors don’t trust the results.  Tightening up on the quality of samples accepted for testing results in increased confidence in the results and in more samples being processed.


But as with most things in life, rejection criteria are not a “one-size-fits-all” decision process.  I can give you a few examples as they appear in the microbiology laboratory.  

The first example is one where the sample is accepted, but the request is rejected.  Frequently we receive urine culture samples submitted with a requisition that says “test all organisms for antibiotic sensitivity” or “test all organisms for urease activity”.  Usually these requests are motivated by chronic recurrent symptoms of infection or stone formation.  Well, that sounds good, and the information may be useful if the sample is pure (one isolate) or near pure (2 or 3 isolates).  But there is no value in testing every isolate in a polymicrobic sample with 4, 5 or 6 different isolates.  There is no reason to do high level testing that you know will not contribute relevant or useful information.

There are many ways of dealing with this sort of rejection, and usually it requires having a conversation with the ordering clinician, but sometimes the clinician has to understand that the extra testing is not going to get done.

Another example is what might be called cascade rejection.  This is where the sample comes with a request for all isolates to be tested for susceptibility to satifloxacin.  This is a situation where, as soon as the “new drug” shows up in the literature, requests for testing show up in the laboratory.  This is where antibiotic stewardship needs to kick in.  If the drug is not on the facility formulary list or the current laboratory list, it is not going to get tested unless there is a research protocol in which the laboratory has agreed to participate, and that information is not going to be available for random and regular use.



Two last quick thoughts.  


First, most laboratories do not yet have the computing power to build rejection criteria into the laboratory software, which will be terrific when it happens.  When that happens, all sorts of complex “if...then” layers can be implemented, and the system will label the sample and block steps all along the sample’s path of workflow.  Until that time comes, the application of rejection criteria falls to the memory of frontline workers with lots of work pressures.  Straightforward criteria have a reasonable chance of being implemented.  Complex criteria have little chance of success unless senior leaders personally insert themselves into the intervening process.
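To make the “if...then” layering concrete, here is a minimal sketch of what such rejection logic might look like.  Everything in it (the Sample fields, the thresholds, the wording of the messages) is a hypothetical illustration of the idea, not any real LIS interface.

```python
# A minimal sketch (hypothetical, not a real LIS API) of layered
# "if...then" rejection criteria, as described above.

from dataclasses import dataclass

@dataclass
class Sample:
    specimen_type: str     # e.g. "urine"
    labeled: bool          # patient identification present and legible
    container_ok: bool     # correct container / fixative / anticoagulant
    transit_hours: float   # time from collection to receipt
    isolate_count: int     # number of distinct isolates recovered

def evaluate(sample: Sample) -> list[str]:
    """Return a list of rejection/caution messages; an empty list means accept."""
    flags = []
    # Layer 1: reject the sample outright when identity cannot be assured.
    if not sample.labeled:
        flags.append("REJECT SAMPLE: inadequate patient identification")
        return flags
    # Layer 2: reject or caution when collection or transport compromises the result.
    if not sample.container_ok:
        flags.append("REJECT SAMPLE: wrong container or fixative")
    if sample.specimen_type == "urine" and sample.transit_hours > 2:
        flags.append("CAUTION: transport delay; interpret colony counts with care")
    # Layer 3: accept the sample but reject part of the request,
    # e.g. no susceptibility testing on a polymicrobic urine.
    if sample.specimen_type == "urine" and sample.isolate_count >= 4:
        flags.append("REJECT REQUEST: polymicrobic sample; "
                     "susceptibility testing not performed")
    return flags

print(evaluate(Sample("urine", True, True, 3.5, 5)))
```

Note the two levels at work: the sample itself can be rejected, or the sample accepted and part of the request rejected, as in the urine example above.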


The second quick thought is that proficiency testing programs can and should  provide programs to challenge rejection criteria.  More on this later.

Friday, May 13, 2011

Not all rejection is bad

In a recent proficiency testing challenge for parasitology we submitted a sample with known artifact from poor handling (freezing).  I can’t say we sent this one totally on purpose, but we have in the past sent samples clearly designed and intended to challenge the laboratories’ rejection criteria.  What we observed was that fewer than 5% of the laboratories included a cautionary note about the quality of the sample.
What I would have anticipated was that a substantial number of reports would come back containing a cautionary note to the effect: “Caution: sample demonstrates artifacts suggestive of improper collection or handling.  Please submit an alternate sample”.

Proficiency Testing is supposed to challenge normal sample handling.  We hope that what we have seen is a failure of laboratories to handle this sample like a normal sample, rather than evidence that more than 95% of laboratories would generate a report without commenting on the suboptimal quality of the sample.

Laboratories know all too well that a number of samples reach the laboratory in an unacceptable condition.  They may have been collected incorrectly, or perhaps put into the wrong container with the wrong fixative or anti-coagulant.  They may have been collected properly, but not labeled in a way that anyone could say with any confidence from which patient they came.  Or they may have been collected properly, and labeled properly, but transported in a way that would not protect the sample from complete or near complete decay.  Unfortunately for all those reasons, the samples all fall into the same category: junk samples.

Working on junk samples is not only counterproductive, it is dangerous, because reports virtually never include the cautionary statement “please note that this result has little chance of being accurate or relevant due to the poor quality of the sample.”

ISO15189:2007 includes the requirement for sample Rejection Criteria (see requirement 5.4.8).  Laboratories have an obligation to establish the requirements of a sample, and to reject samples that don’t meet the requirements.  It is for the good of the patient, and for the good of the clinician to avoid making decisions based upon faulty information from faulty specimens.  

And we believe that PT programs have an obligation to intermittently challenge the laboratories’ decision making.  Is there any difference between demonstrating that a laboratory tested a good sample and got a wrong answer, and demonstrating that a laboratory tested a poor sample and got a poor answer?  Well, there is a difference, but the bottom line is the same: the clinician receives a report with information potentially damaging to patient care.

At this point, let me point out that even the international standard misses an important consideration of sample rejection.  Not all sample rejection is the same.  There are different levels of rejection, all of which are acceptable, appropriate, and to be expected in their respective situations.  And proficiency testing schemes are ideally placed to challenge all these levels.

More on this later.


PS: For those that want or need more information on the international standard, see: The ISO 15189 Essentials: A practical handbook for implementing the ISO 15189:2007 standard for medical laboratories.  CSA Standards, November 2010.
available through www.ShopCSA.ca

Assessment, Assessment, and Assessment

Note: Some of you may be aware that Blogger had a crash over the last 24 hours.  Recent posts were lost.  This one was posted on Wednesday May 11.  I am reposting it. 

If you buy into the concept that we learn from experience, then it is an easy step forward to accept that there is much value in assessments that objectively measure what we have done in the recent past.  Objective assessment distinguishes between what we have done and what we believe we have done, and points out, sometimes painfully, what we have not done.  It is an integral component of Quality and learning and improvement.  Today we are heavily engaged in the assessment process, in a variety of aspects.

PART A.
A few weeks back I mentioned that we at CMPT had screwed up and were found to have a non-conformance in our assessment for compliance to ISO9001:2008.  We have re-organized and got our internal audit completed.  We identified the areas that need improvement and submitted our records and plans for going forward.
Today we were notified that our evidence has been accepted and our certificate of re-registration has been granted, recognizing that we will have a visit to demonstrate that the action plan is indeed an actual action plan.  Hooray for us.

PART B.
In many situations (see above for an example) we can and should do our own internal assessments and learn our own lessons.  But there are times when an external assessment works better.
Consider for example, in the area of education.  When students, including adult learners, take courses, before we can say they have studied and absorbed content, we do an assessment, sometimes referred to as an examination.  We don’t leave it up to the students to do their own personal assessment and then get back to us (at least most of us don’t do that!).  We measure their knowledge through pre-tested questions, and determine if their responses are consistent with the new knowledge that we expect them to have learned.  Internal personal assessment is an important self-learning tool, but when it comes to certification, personal evaluation is not enough.  We base assessment on external evaluation.  
I mention this because we are about to go into final exam season for our Certificate Course in Laboratory Quality Management.  
We are confident that everyone will demonstrate their new knowledge in exemplary fashion.

PART C.
But just as we don’t rely on students to evaluate their knowledge, we don’t depend on ourselves to evaluate the course that we teach and the manner in which we teach it.  Our certificate course is in its eighth year, and has reached a level of mature stability.  We are confident that we know what we are doing.  It is now time for us to find out if others agree.
In part we have done this regularly throughout: we have done student surveys after each module and at year’s end almost from the beginning.  And we have made changes each year, in large part, based on the surveys.  But at some point that approach is not enough.  Student participants tend to have a short term view, and their assessment, as valuable as it may be, is an immediate reaction and may be biased if they perceive our process as not totally anonymous.  So they may be telling us what we want to hear rather than what we need to hear.
So this year we have brought in the expertise of a group that spends a lot of its time doing external evaluations of courses and programs.  They look at content and delivery, and interview faculty and students.  They look at course objectives and determine if we are meeting what we committed to meet.
All of this is voluntary.  We don’t have to do this.  The university does not require it.  Our department does not require it.  But we require it.  How can we give a message of commitment to Quality if we don’t take the extra step?  (I know this appears to be inconsistent with Quality defined as "meeting expectations".  More on this later.)
So I am looking forward to the exercise, and expect that we will do well.  I will be surprised if we are deemed so perfect that we have no areas for improvement, and will be similarly surprised (perhaps hugely surprised) if we end up at the other end of the spectrum.

All this takes time and effort and energy and money (TEEM expenses).  It  is all part of the Quality exercise. 


Sunday, May 8, 2011

SWOT Priority Table for Medical Laboratories.

So I was thinking about SWOT analysis and came up with some ideas for a tool that one could develop to help translate SWOT analysis information into a priority list, so that the important repair tasks get addressed first.
The attached is based on the following assumptions or principles.

  1. Laboratories with more than one weakness, threat, or area of opportunity may seek help prioritizing tasks.
  2. While all Quality procedures are equal, some are more equal than others.  (I call this the Animal Farm principle.)  For example, if there are improvements to be made both in Management Review and in updating the organizational chart, it is more important to focus on management review.
  3. If a procedure is at an acceptable level, it does not need any work (other than maintenance), but among the tasks that need to be done, those at a level of severe deficit (threats) need to be addressed first.
  4. Those where there are weaknesses or opportunities come next.  
So here is how this works.
  • In the left hand column I have listed many of the areas and procedures  that a laboratory doing internal review should evaluate.
  • Each area should be considered as either being in a healthy condition (strength) or having a weakness.  The weakness may be bad enough to be a potential liability (threat).  Or there may be resources available to address certain areas (opportunities).  It is conceivable that an area could be both a threat and an opportunity at the same time.
  • For each procedure I have put a "1" in the Strength column.  After evaluation you can change this to any value from 0 to 1, to one decimal place.  You can also add a value (0 - 1) in any or all of the other columns.
  • The four columns can add to 1.0 or greater. 
  • The more you make the line worth, the greater will be its priority.
  • Different procedures have different inherent procedure priority values (PPVs).  The PPVs given are the ones estimated by me.  If you think different values could or should be used, change them.  Again, the greater the value, the greater will be its priority.
http://dl.dropbox.com/u/173944/SWOT%20PriorityQD-0511.xls
 
Note: I have verified that this file works properly.  
Also note I have not  validated it as giving the best priority for addressing tasks.  
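For those who would rather experiment in code than in a spreadsheet, here is a minimal sketch of the same scoring idea.  The procedure names, the scores, the PPV weights, and the scoring rule (priority = PPV multiplied by the sum of the four column scores) are illustrative assumptions about how the table might be computed; the spreadsheet above remains the reference.

```python
# A minimal sketch of the SWOT priority scoring described above.
# All names, weights, and the exact formula are illustrative assumptions,
# not the contents of the linked spreadsheet.

# name: (PPV, strength, weakness, opportunity, threat), each score 0.0-1.0
procedures = {
    "Management review":    (1.0, 1.0, 0.6, 0.0, 0.4),
    "Document control":     (0.8, 1.0, 0.2, 0.3, 0.0),
    "Organizational chart": (0.3, 1.0, 0.5, 0.0, 0.0),
}

def priority(ppv, s, w, o, t):
    # The more the line is "worth", the greater its priority.
    return ppv * (s + w + o + t)

# Rank the procedures from highest priority to lowest.
ranked = sorted(procedures.items(),
                key=lambda kv: priority(*kv[1]),
                reverse=True)

for name, scores in ranked:
    print(f"{name:22s} priority = {priority(*scores):.2f}")
```

With these illustrative numbers, management review outranks the organizational chart, which is exactly the Animal Farm principle in assumption 2.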
 
Feel free to use it, or experiment with it, or disregard.

Interested in your thoughts.

Thursday, May 5, 2011

SWOT

Recently I have been taking a close look at quality standards, including ISO9001:2008, ISO15189:2007, and ISO17025:2005.  All of them have a lot in common.  First off, regardless of the decision for certification or accreditation, they are for the most part very useful reference sources for people who organize and operate laboratories; implementing quality management is a good thing to do.  Second, all of them recognize the importance of management setting the quality agenda based upon information gathered through management review.  And third, all of them are future-oriented documents that focus on ensuring the laboratory will be better tomorrow than it is today.  All that goes under the headings of planning, or prevention, or continual improvement.

Along the way, I noticed that while all the documents make these points, there is one thing they all seem to ignore or exclude: they are all weak on providing suggestions or recommendations for actually implementing quality programs in an active laboratory.
So, in that vacuum, I offer up my recommendation for an organized look at the organization's Strengths, Weaknesses, Opportunities, and Threats as a powerful planning and improvement program.  Done correctly, SWOT analysis is an extremely useful planning and monitoring tool for laboratory management to apply on a regular basis.

At CMPT I have taken to doing a formal SWOT every 2 years.  That way I get a chance to see if I am actually making progress or just satisfying immediate concerns.

SWOTs have a lot in common with root cause analysis.  You can do either in 5 minutes if you want to.  In both instances you get out what you put in.  A five-minute job satisfies the piece of paper and gets a tick on the assessment form.  A more formal open discussion will take a lot longer, but the outcome should be worth it.

There are probably hundreds of ways to work through a SWOT analysis.  I try to keep it focused on the areas that matter most to me and my operation.  I try to focus on 9 topics: Management, Personnel, Facilities and Environment, Quality System, Products and Services, Clients and Satisfaction, Awareness, Competition and Collaborations, and Finances.
For each of those areas I first think about what we have and how it can make us stronger, and whether our current action is making us weaker.  Are there new opportunities to make things better, and if I don’t act on those or make the changes that I need, what can put us in jeopardy?
Once I have created my list, I look at what tasks I have created and then put them in priority order, recognizing that over 2 years I can work on 5 or 6.

For me and my program, the process has worked fairly well.  It has created an organized structure for me to identify the things that we are working on now, which include (a) capitalizing on non-EQA projects to create more diversity and a stronger financial platform, (b) increasing the analytes we provide, (c) aggressively finding a new location, (d) increasing the energy we put into ISO9001:2008, and (e) putting a temporary hold on implementing the new standard, ISO 17043:2010.  Maybe I would have identified these issues anyway, but the structure helped make them more obvious.

And then to ensure that I actually do something, I make the whole thing public through my annual meeting and annual report, and put it on the agenda for year-over-year discussion.  I can dodge and weave, but in the process of open forum, I cannot hide.

The process has been 98 percent excellent as a forward driving device, but not 100.  I still have the same weakness and threat that I identified now 5 years ago.  I have no succession plan in place for me, and every year that looms as a greater issue.  More on that another day.

But to bring me back to where the comment started: all the quality documents talk about planning and review, but none identifies SWOT as a useful, indeed valuable, indeed indispensable tool.  Lots of space is wasted on other tools of dubious value (like uncertainty of measurement), but none is given to making my planning process more effective.

I need to do something about that.

Monday, May 2, 2011

A dismaying visit to yesteryear

I understand the challenges in Quality research, in every definition of the phrase.  In the context of MMLQR I mean that it is difficult to create new knowledge that is relevant to the study of implementing Quality measures in the medical laboratory.
Medical laboratories are at a new juncture.  To move forward we need knowledge about what is working and what is not.  Are national or international standards creating an environment of reduced laboratory error?  Are Quality initiatives making medical care safer?  Can we draw new insights from past experiences to learn new and better approaches to communicating our message in a more constructive manner?
We need novel ideas to answer basic questions that affect our laboratories and our role in care.

I commend all those that try, and I congratulate those that create useful, contributory information.  I would like to say that I congratulate the authors of an article in the March 2011 edition of ASCP LabMedicine.  But that would not be very accurate.

I was interested in reading the article entitled “Measuring the Application of Quality System Essentials in Vermont Clinical Laboratories”, especially in a journal by ASCP.  It was published under the section called “Science”.  And then I saw that it was an evaluation of a survey that was sent out in 2005, nearly six years ago.  Basically, a survey was sent to approximately 500 laboratories asking (A) if the respondent was aware whether their laboratory had some aspects of Quality in place, (B) if they believed that the measure being assessed was effective, and (C) if they believed the measure contributed to Quality.  The finding was that supervisors with more than 10 years of experience were more likely to say they were aware that there were some Quality measures in place, and that supervisors were more likely than non-supervisors to say they believed that having an orientation program was effective and contributed to Quality.
About half of the supervisors didn’t know if their laboratories developed specifications for selecting purchases or vendors.

In the academic environment we talk about research being the process of developing “new knowledge” as the path to new concepts, methodologies, and understandings.  I know this sounds harsh, but I find it hard to characterize the publication of six-year-old “I think” or “I believe” information that is not verified or linked to a tangible fact as meeting any of those criteria.  Even if the study were published now so that it could be used as the basis of a repeat study, to see if there were changes in knowledge and belief, it would be highly suspect, at best.

The Quality movement in medical laboratories is a rapidly evolving picture.  In the US, laboratories are choosing to voluntarily adopt international standards (ISO15189:2007) and are looking at Baldrige as the measure of excellence.

Information about soft perceptions from 6 years ago does not meet any of the criteria for new knowledge.  For Quality in the medical laboratory to progress we do need new information that contributes to new concepts and better understanding.

Some folks might think that I am being unfair or unduly critical.  But they would be wrong.  To paraphrase a comment that I heard on television a few weeks back, if I agreed with those folks then I would be wrong too.