Monday, August 29, 2011

Quality Partners - Gap Analysis versus Gap Repair

The other day I received an email from a friend with whom I have worked in the past on implementing quality practices.  He wrote in part:

A lot has changed at “the laboratory” and the staff in the lab need more than just documentation.   All through the mentorships, they seem to have been playing for time and now that I am here with them daily, they have realized that they have to try hard to change their behaviours and work more towards living quality and not just trying to write, read and sign SOPs.  It is amazing, some of the things I have found within one month of stay here. You will not believe that they have been participating in EQAs for the past five years but have never ever reviewed any of their returns for UNACCEPTABLE RESULTS.

In many jurisdictions, including North America, Europe, Australia, and New Zealand, laboratories are required to participate in an accreditation program, which provides some oversight to ensure that laboratories are doing what they are supposed to do to maintain some level of quality and competence.  In other countries, national accreditation is available but not required; laboratories that see a quality or business advantage participate on a voluntary basis.  In still other countries there is no national accreditation, but laboratories can seek international accreditation, in my experience mainly from the United States, Australia, or South Africa.  Generally the experience seems to be that laboratories with resources or connections seek accreditation, while the others do not. 

The accreditation process is not perfect.  It can generate many unintended consequences, most notably the tendency toward brief bursts of activity surrounding an impending visit followed by a long sleep.  The other is the tendency to game (cheat at) quality control and quality assessment. 

But as the letter above so clearly demonstrates, the absence of accreditation and the assumption of voluntary commitment does not work either.  The letter carries some pretty concerning messages.  The laboratory has demonstrated to itself and to the PT provider that it performs incompetent testing.  The laboratory has demonstrated the same fact over and over, according to my colleague, for 5 years.  And it appears that the laboratory does nothing about it.  It seems that putting out incorrect results is not a problem.  It seems that failure to support patient care is acceptable practice.  

Somewhere in the education, training, and knowledge of the laboratorians at the laboratory there has been a key breakdown.  The point of the exercise is not merely to look for errors or opportunities to improve.  The point is to do something about them.

To be fair, this is not the only laboratory that misses the point and continues with a pattern of failure.  It happens everywhere.  And you can see how it happens.  The whole chain from top to bottom is broken.  The reality is that in many places NOBODY cares… not the Ministry of Health, not the Hospital Superintendent, not the clinical staff, and not the laboratory administration. 

Personally I put the greatest blame on the laboratory directors.  It is probable, or at least possible, that no one in the ministry or the hospital or among the clinicians has any idea about right results and wrong results, or about proficiency testing as a quality indicator.  They have other things on their respective plates.  But the directors should at least be sufficiently knowledgeable about laboratory work to know when their laboratory is giving out wrong results.  

Actually, in my experience the one group that does do something to address the problem is the clinicians.  They simply stop sending specimens.  What is the point?  They know from their own experience not to trust the results anyway.

Finding solutions can be a challenge.  The PT provider most likely is aware of who is failing and who is not.  Start off with an email or a phone call and find out what is happening or perhaps if anything is happening.  And see if some assistance can be provided within the finances and scopes of the organization and the laboratory. 

If there is no interest to work toward solutions, then the only reason that the PT provider can justify continuing to provide the service is strictly for the money. 

And that is immoral. 

At least that is my opinion.  

Thursday, August 25, 2011

Learning the Past or Learning from the Past?

Paul Borawski is the CEO of the American Society for Quality.  He writes a blog, “A View from the Q”, which can be found through the ASQ website.  In his offering from August 16th, he asked an interesting question.

The philosophy of modern quality reaches back to the late 1930s and 1940s. That’s not so long ago, but it might be ancient history. I’ve been in three large quality gatherings in the past year where the question was asked, “How many of you have heard of W. Edwards Deming?” I was shocked and saddened when less than a third of the hands went up. “How about Joseph M. Juran?” Fewer hands. It occurs to me that something isn’t right about that. Am I being nostalgic, or does the quality community bear some responsibility for making sure its philosophic foundations are not lost to history?

It is an interesting comment that reflects as much on the author as it does on his observations. 

My guess is that Paul and I are about the same age, which is an age very vulnerable to nostalgia.  In my day, things were better, or tougher.  My teachers were wiser, or more indifferent.  I had to walk 20 miles to school in the snow without boots, carrying my lunch box in one hand and fighting off wolves with the other.  And in my day we knew our history and were proud of it.  And don’t ever get me started on those university students of today.

All good stuff, but it is not only fluff, it is almost always wrong.  We were just as illiterate about our history as kids are today.  Probably worse.  Back in our day all we had was the Encyclopaedia Britannica, which was usually at least 10 years old.  And maybe a stack of Life magazines in the garage.  Today the kids have Google and Wikipedia and virtually instant access to the past, or at least the past as we recorded it.  I would venture that today’s 35-year-olds are far more knowledgeable about things that occurred 40 years ago than we were.  And more importantly, if they want to find out about that stuff they can Google it in a heartbeat.

While some of our generation may know about Frederick Taylor, how many ever knew who Levey and Jennings or Dodge and Romig were, even though some of us still use their charts and tables today? 

There is value in knowing the names within our historical roots: Shewhart, Deming, Juran, Crosby; but more as an aid to knowing about planning and doing and studying and acting.  If the names get lost in time or get mixed up with certain mythologies, that is OK as long as their messages remain.  I know we all want to be remembered, but can we agree that the real issue is appreciating the concepts of monitoring for error, implementing corrective actions, and applying all this to create a continuum of Quality improvement?

By the way, I have been teaching an on-line course in Laboratory Quality Management through the University of British Columbia for 10 years and still include The Deming Management Method by Mary Walton and Quality Without Tears by Phillip Crosby on the required reading list.

Wednesday, August 24, 2011

Quality that drives folks crazy

The other day I was part of a meeting reviewing a document intended to standardize Quality Standards.  The point being put forward was that all Quality standards should use the same words and use them to mean the same thing.  That way, regardless of which quality standard you use, you will immediately understand the document and be able to implement the requirement.  That makes sense.

But as the document continued, the devil in the details started to appear. 

Is the word “document” a verb or a noun, or can it be both?  Is “documented” an acceptable adjective?  The same applies to the word “record”.  And what about the words “goals”, “objectives”, “aim”, and “target”; are they the same or different?  Is the term “goals and objectives” meaningful or redundant? 

The terms “continuous” and “continual” are raised as well.  Is the correct term “continuous improvement” or “continual improvement”?  Apparently the word “continuous” means “ongoing without interruption”, while “continual” means “ongoing but at regular intervals”.  Both of these terms need to be differentiated from “continued” because that term means “ongoing but after having stopped”.  And should we refer to “continuous education” or “continual education” or “continuing education”?  Apparently the term “continuing” can mean one or the other.   So perhaps the correct term for improvement is “continuing improvement”.  And where might “continuity” fit in?

In my own technical committee, we have struggled with a truly pointless convention.  When a “sample” is being collected from a patient (or is that a customer or a client?), we say that this is an action (procedure) in the “pre-examination phase”.  And if someone wants to refer to this as collecting a “specimen” as part of the “pre-analytic phase”, then they are wrong.  This is truly the stuff that some Qualitologists and Standardization people love, but which really drives most people crazy.
This is the stuff that gives Quality a bad name, as being deeply involved in minutiae rather than in issues that are really important to organizations.  And I agree with that too. 

There are lots of drawbacks to this sort of discussion.  First and foremost, there must be hundreds of words that would need to be examined and discussed, and thousands of people who would be affected (should we call them “stakeholders” or “interested parties” or “affected parties”?).  It would take years to work through, with the likelihood of some sense of “consensus” or “agreement” or “general agreement” being very limited.  Second, even if we could come to some “common understanding” in English, would the subtleties translate into French or Spanish or Mandarin or Russian?  And third, it drives the "big-picture" advocates crazy, and even more importantly away from trying to improve Quality.

My solution to all this is to reduce the number of jargon words in what we try to convey.  And further to the point, reduce the number of words period.  Pictures are good.  More importantly, we should quit worrying about the trivial issues. 

Quality is about big principles of making sure people understand what they are supposed to do, thinking about a project before jumping in, checking what went right and learning from what went wrong.  Whether you used a red pen, or a blue pen, or white-out is a side issue. 

As this standards year soon comes full circle (World Standards Day is October 14th) we should resolve that next year our goal (objective) for Quality should be to keep it simple and effective.

Monday, August 22, 2011

Quality Management Education

In 1994 I was invited to become the Chair of the Canadian Advisory Committee to the ISO Technical Committee 212 and to become a member of the working group that was to design a new quality standard for medical laboratories.  That document became known as ISO 15189:2003 – Medical laboratories – Particular requirements for quality and competence.

One of the premises behind 15189 was that quality initiatives would likely not be effectively incorporated into medical laboratories if quality continued to be an “off the side of the desk” task of the senior technologist; medical laboratories should instead identify a person who could commit more time and effort to the process.  That position was called a Quality Manager.  It was a good idea.

In Canada this was going to be a bit of a problem, because most laboratories and schools of technology had no idea what a quality manager was going to do, or what sort of information they would require.  So at the University of British Columbia we decided to take on the task of creating a course.  A few decisions were made.  The program should be available to people who had worked in medical laboratories for at least 5 years, regardless of their background; a prior BSc degree would not be a requirement.  The course should be available as an on-line program so that people could take it without having to quit working or travel to Vancouver.  The course should be interactive, so that the people taking it were part of a discussion group rather than being at home alone with a computer.  The course should be useful for working with 15189 but should not be focussed solely on 15189; Quality should be addressed as a broader subject. 

So once we went through the university process, a faculty was assembled, the course was created, and it was first offered for enrolment in 2004.  The course is structured as a 20-week on-line course with opportunity for interactive discussion, either by chat or by asynchronous interaction.  The course provides a library of all the required text books, including a variety of recommended and required readings.  The course is objectively monitored and assessed, and a certificate is provided for successful participants. 

The course has continued on ever since, with continual review and improvement.  Course reviews are open and transparent and available on-line.

In January 2012 we will be offering the course for its ninth year.  

The on-line start date is Wednesday January 11, 2012.   

The course continues to be driven by the same principles.  This year, in addition to addressing ISO 15189, it also looks at ISO 9001:2008 and ISO 9004:2009.  We look at Quality Partners and the roles that they play.  We also look at Costs of Quality (and of poor Quality).  We address Root Cause Analysis, FMEA, Quality Indicators, Implementing Quality, and a variety of other topics.  Most participants find that the course requires a commitment of some 6-12 hours a week for reading and discussion.

What started as a course primarily for British Columbian laboratorians has become popular across Canada, and internationally with participants from China, Africa, South America, Central America, Saudi Arabia, and the United States.  It has been taken by technologists, laboratory assistants, pathologists and residents. 

We will start to accept enrolments in September 2011.

For people interested, please visit or contact our coordinator at                   

People who have taken the course are welcome to make comments.

Tuesday, August 16, 2011

Quality means never having to say you are sorry ... TWICE

Jim writes a very constructive comment about Quality.  It is consistent with the Crosby view of Quality (Do It Right the First Time). 

Quality is the doc ordering the right test and the lab drawing the right specimen on the right patient.  The test result is right (correctly reflects what is going on in the patient) and is sent to the right person. And the price/cost is right (we're spending taxpayers money). 

It is completely consistent with the Four Absolutes that the definition of Quality is conformance to requirements, and that the system of Quality is prevention.  The performance standard for Quality is zero defects and the measurement of Quality is the price of non-conformance.

My problem is that this is all about the ideal, and not the real world.  Slips happen and mistakes happen.  They don’t happen all the time or even at regular or predictable intervals.  They annoyingly occur like atrial fibrillation: irregularly irregular.
And as our laboratories have become more consolidated and more complex, with more highly sensitive and intricate equipment, errors happen faster and in ways that may never get detected until it is far too late.  Remediating early errors is often an easy fix.  Detecting and fixing downstream errors is ALWAYS a pain.
Every day the nonconformities happen; and they never seem to stop.  Fortunately, the vast majority of nonconformances are minor and don’t affect patient care or management or result in poor outcomes.

That is not to say we should be untroubled about error, but I think that Crosby was excessive in defining Quality by the ideal.  Deming was about 20 years older than Crosby but was still very active in the ’70s, which was Crosby’s heyday.  Deming wrote extensively on the hazards of slogans and the anxiety they raised.  He was very unimpressed by the notion of Doing It Right the First Time.

I think Deming was closer with his sense of Continual Improvement.  It is not only about putting out fires, but catching them early before they get out of control and sorting out why they happened and fixing that.  When I look at reports of patient error, if you knock out the repeat problems the total number of errors drops ... dramatically.  
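The arithmetic behind that claim is easy to sketch.  Here is a minimal illustration, with invented error categories and counts (not real laboratory data), of how much of an error total is typically just the same few problems recurring:

```python
from collections import Counter

# Hypothetical error log: each entry is the category of one detected error.
# Categories and counts are invented for illustration only.
error_log = (
    ["mislabelled specimen"] * 14 +
    ["wrong test ordered"] * 9 +
    ["delayed transport"] * 6 +
    ["transcription error", "reagent lot mix-up", "result sent to wrong ward"]
)

counts = Counter(error_log)
total = sum(counts.values())

# "Repeat problems" = any category seen more than once.
repeats = {cat: n for cat, n in counts.items() if n > 1}
without_repeats = total - sum(n - 1 for n in repeats.values())

print(f"Total errors: {total}")                                    # 32
print(f"Distinct problems: {len(counts)}")                         # 6
print(f"Errors if each problem occurred only once: {without_repeats}")  # 6
```

In this made-up log, three recurring problems account for 26 of 32 errors; fix their root causes and the total collapses, which is exactly the continual-improvement point.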

So I am going to argue the following:
Since errors happen often silently and beyond your control, the effective measure of Quality is the prevention of and the rapid detection of errors in order to avoid or at least reduce customer inconvenience and harm.

Quality is never making the same mistake twice.

Monday, August 15, 2011

Managing Management Review.

CMPT is in its annual report season, which means it is time for my annual Management Review.  Over the years I have learned to enjoy the process.  
In many respects it is like eating olives or drinking coffee; the first time you do one it is not particularly pleasant, but over time they become an “acquired taste” and actually become better than “not bad”. 

I recognize that my career path is probably a little different from others because I spend a lot of time in and around some of the intricacies of Quality, and so I regularly look at the management review requirements in a variety of documents including ISO 9001:2008 (Quality Systems), ISO 17025:2005 (Calibration and Testing Laboratories), ISO 15189:2007 (Medical Laboratories) and ISO 17043:2010 (Proficiency Testing Providers).

All of them have a lot in common. 
All of them in one way or another speak to reviewing existing policies and procedures, the results of audits and assessments (both internal and external), customer feedback, comments and complaints, and the processes of finding non-conformities or opportunities for improvement and ensuring that the appropriate corrective  and preventive actions have been undertaken. 

There are a few differences too.  15189 addresses a few additional issues, such as monitoring turnaround time (boo-hiss. Whatawasta time!), reviewing an evaluation of suppliers (a good idea that should exist in all the other review requirements), and reviewing quality indicators for monitoring the laboratory’s contribution to patient care.  I personally think this last one is a good idea, but an example of really poor writing.  I think it is about readability and relevancy of reports, and maybe about test menus.  But neither of those is an issue one would or could follow with an indicator.   

9001 offers some additional elements as well that contribute greatly to quality, including in the laboratory, and laboratorians would be well advised to supplement their assessment with some or all of them, because they help keep the quality system on track and current.  There is an expectation to review the organizational structure.  Laboratorians have learned all too well about the impacts of organizational change, both positive and negative.  You want your workers to follow the policies and procedures set for your laboratory, but they will likely be unaware of organizational changes that have a direct effect on those policies and procedures.  If you don’t make the required adjustments, don’t be too surprised when problems result.  This year in my review there were six policies that needed updating to adjust for changes that occurred over the last year.

And let me take this one step further.  As the change management folks are fond of saying, the only constant in the medical laboratory is change.  Programs and patient care issues come and go on an irregularly irregular basis.  One day there is a haemodialysis program, the next day it is gone.  One day there is a leukemia ward, and then it is moved to another facility.   A new patient care initiative is dropped in.  Suddenly there is a new and urgent demand for three new tests.  If management does not keep these issues in mind, then meaningful strategic planning is impossible.  At least the annual management review process creates the opportunity to bring these issues to front-of-mind and to make some stabs at learning from what happened last year, and planning for next year. 
So management review is not only a nice thing to do, it provides management with the tools to make good forward thinking plans and decisions.  
Remember when management review in the medical laboratory only meant making sure that the standard operating procedures had a current date and signature? 

Pass the olives please.

Saturday, August 13, 2011


A few months ago I wrote on the topic of assessing customer satisfaction (see Satisfaction – June 11, 2011).  I offered a series of recommendations (henceforth referred to as Noble’s rules) for customer satisfaction surveys. 

They included:
  • Focus them to a single issue
  • Limit the survey to only a few questions; best is to keep it to 5-6 and NEVER more than 10, and make them as uncomplicated as possible.
  • Pre-test the questions to reduce (you can never avoid) ambiguity.
  • Make sure that it can always be completed in 3 minutes or less.
  • Never require an answer. That is a guaranteed invitation to bogus information.
  • Decide in advance which slice of your audience you are interested in and then only focus your energy on that group. General send-outs are a total waste of time. 

Somebody must be listening.  Yesterday I received an automated telephone survey from a company that I dealt with the week previous.  The survey was done over my cell phone.  They told me that there would be one question only.  Total effort on my part was about 20 seconds of listening to the preamble and question, and 3 keystrokes; one to agree to the survey, one to answer the question and one to confirm.  Talk about efficient and effective.

I raise this because it was in stark contrast to another one received by email on the same day.  They suggested it was going to take about 15 minutes but that my name was going to be included in a draw for one of 10 iPads.  At about 3 minutes I bailed out because the questions were getting progressively more complex and convoluted.  I decided that even if my chances on the iPad were near perfect, it still wasn’t going to be worth my while.

Recently I visited a laboratory that showed me a survey they had put together to comply with their accreditation requirements.  It was an on-paper document that had been mailed out to about 500 customers.  It had 20 questions, and responses were all text.  The costs mount quickly once you add in the time to create the survey, paper costs, postal costs for both send-out and return, the time and effort to transcribe the data to the computer and then try to analyze text, and then the creation of the report. 

The time from creation of the questionnaire to the completion of the report was barely less than 6 months, and the total cost must have run well over $30K.  When you add in the cost of benefits it could easily be $50K.
When I asked what actions resulted, the answer was “nothing yet”.

So what was accomplished?  Someone had spent nearly 6 months on a project that would result in a checkmark on an accreditation review, but would likely not have any short- or intermediate- or long-term effect.  Perhaps they might make a presentation on the results, but it certainly would not be publishable.
It would NEVER be repeated to gather longitudinal information.

There are a number of reasons why this is problematic.  The project represented a really poor use of personnel time, effort, and energy, and a total waste of public sector money.  The information received was in all likelihood unreliable.  There was no output (other than the report) and no actions taken.

Is it any wonder that Quality sometimes gets swacked with a reputation of being of dubious value!  Improving Quality in the medical laboratory can improve patient safety and decrease time and costs, provided that some thought goes into the process. 

Message for Quality and all its components:
  • “Everything should be made as simple as possible, but no simpler" – Albert Einstein
  • "Simplicity is the ultimate sophistication" – Leonardo Da Vinci
  • “Keep it simple, stupid” - ?

Thursday, August 11, 2011

Better Reporting for Microbiology Samples

In clinical microbiology we have the same three-phase cycle as everyone else: the pre-examination phase, the examination phase and the post-examination phase. 
Microbiology samples have some unique characteristics that directly affect how we have to act in all three phases.  Because many (hopefully not all!) samples contain contamination from surface flora, there can be only a very brief interval before they are processed in the laboratory.  Otherwise these contaminants overgrow and confuse the clinical interpretation of results.  At the other end of the cycle, because our results tend to be wordy and qualitative, they often need to be accompanied by interpretive or explanatory or cautionary text.  And that is sometimes a problem.  In 1996 we published an article about reporting variation: laboratories found 22 different ways to report the same quantity of bacteria in a urine sample.  You can call that a lack of standardization in reporting structure.
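To make the standardization idea concrete, here is a minimal sketch of mapping variant free-text quantity phrases onto one agreed reporting form.  The variant wordings and the chosen standard form are invented examples of the kind of variation described, not the actual phrases from the 1996 study:

```python
# Map free-text urine-culture quantity phrases onto one agreed form.
# Phrases and the standard form below are hypothetical illustrations.

STANDARD_FORM = ">= 10^8 CFU/L"

VARIANTS = {
    "greater than 100,000 cfu/ml": STANDARD_FORM,
    ">10^5 org/ml": STANDARD_FORM,
    "heavy growth": STANDARD_FORM,
    "4+ growth": STANDARD_FORM,
}

def standardize(phrase: str) -> str:
    """Return the agreed form, or flag the phrase for committee review."""
    key = phrase.strip().lower()
    return VARIANTS.get(key, f"NON-STANDARD (review): {phrase}")

print(standardize(">10^5 org/ml"))   # >= 10^8 CFU/L
print(standardize("some growth"))    # NON-STANDARD (review): some growth
```

The design point is the fallback: anything not already agreed upon is not silently passed through, it is surfaced for the committee to rule on, which is how a consensus vocabulary grows.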

In a recent customer survey, CMPT asked about the information that we provide around our samples.  Overall we found the responses quite illuminating and helpful.  Some examples: 

CMPT expectations for sample reporting, do not always agree with the policy and procedures of other institutions. It is sometimes difficult in knowing who is the best practice source.


The critiques should stick to the facts. There are many gray areas in Microbiology; many split opinions with no right or wrong answer. The critiques often have the habit of stating these “gray” areas as clear cut, with the author’s opinion being presented as the only correct interpretation. It does not broaden anyone’s knowledge, by failing to represent the true dilemmas we face with cultures, and that there are no right or wrong answers in some situations.

I highlight these two results to make a few points.  First, I understand the feeling of challenge in sorting out who is and who is not a practical source of information; but I disagree that, with some consideration, the answer is difficult.  Second, I agree that Microbiology opinion can suffer because of the “gray” areas. 

In the olden days (like in the ‘50s and ‘60s and ‘70s) laboratorians were very comfortable with these gray areas because it meant that any answer was an OK answer.  Laboratorians were free to report results any way that they saw fit.  As I was told by a group of colleagues, "No one can infringe on my right to practice medicine the way I see fit".  

The problem with that approach was that it was predicated on the notion that laboratories exist for the benefit of the laboratory, and not for the clinician and not for the patient.  Today the rules have changed and we understand that laboratory reports exist to provide information for the users.  When patients are tested in multiple laboratories, there is some obligation to at least ensure that the information they receive has some level of consistency.  Using the notion of “gray areas” as a justification for inconsistent reporting structure is not appropriate; something more definitive is expected. 

In the absence of legislation and regulation or some other legally binding notion, at least we should make our decisions based on consensus within our community.  It is called standardization.

And that is where PT/EQA committees can and do play an important role. 

Take CMPT for example.  Our committee is a group of 13 medical microbiologists, scientists, and technologists with both clinical and laboratory expertise.  The group comes from across Canada and represents university hospitals, community hospitals, and community laboratories.  The group collectively perceives the wording of reports as important and spends considerable time reaching mutually agreeable decisions about meaningful clinical reports.     

If ever there is a group ideally designed to come to consensus opinion about the optimal way to report clinical samples, it would be a group just like this.  CMPT is not the only PT/EQA program in Canada, nor is it the only one that has a committee that assesses reports and determines optimal reporting patterns.

So with respect to the two comments mentioned above, I have to respectfully disagree.  There are better reports and poorer reports and there are community based PT/EQA committees well positioned to help define the better way to report.

Don’t agree?  That’s OK.  Submit an appeal with justifications.  The committee also has the obligation to provide a second look.

But we will continue in our obligation to provide what we see is an important contribution to reporting standardization. 

It is one of the things that we do.

Tuesday, August 9, 2011

Tangible Quality in Action

Manky Badger asks for some tangibles that will illustrate the notion of “Quality” in action. 
These tangibles are derived from knowledge, study and experience dating back through the past 70-80 years and adopted by many business sectors.  They exist today in a variety of sources, most commonly from the International Organization for Standardization. 
Whether a laboratory adopts them because they have to (mandatory accreditation or regulation) or because they see opportunity and advantage (voluntary adoption or voluntary accreditation) depends on the situation and circumstance.
Some laboratories do both.

Many organizations in manufacturing and service sectors have demonstrated the tangible advantages to implementing Quality practices. 
Most locales that have created requirements for mandatory accreditation adapt these same tenets into their own language, regulation, or legislation.

An active Laboratory Quality program includes:

Management should manage:
Laboratory Management should show leadership in the laboratory by establishing policies that are important to laboratory testing, and then ensure that everyone in the laboratory knows what they are and why they are set as policy. 
Laboratory Management should then ensure that laboratory decisions are consistent with those policies.

Laboratory suitability:
Laboratories should be fit for working in.  They should be safe and secure and have enough resources to perform the tests that the laboratory is required to perform.

In-laboratory communications:
To reduce the confusion that leads to testing errors, ensure that your laboratory staff know to whom they report and who reports to them. 
Ensure that laboratory users know who is responsible for laboratory decisions.

Laboratory documents should be written in a way that can be understood. 
When documents exist in different versions, people working in the laboratory should be able to know which is the most current and active version.
When documents are stored, they should be stored safely and in an appropriate way so that they can be retrieved when needed.

Training and competency of laboratory staff:
Employees should know what tasks they have been hired to fulfill.  They should be trained to perform those tasks and their training confirmed.  They should be checked on some sort of regular basis to ensure they have continued competency doing those tasks, especially if they have been away, or ill, or the task has changed over time.
Laboratory personnel should have access to the information and resources they need to perform their tests and tasks both efficiently and effectively.  This means there should be some form of written instructions.
Equipment, materials, and reagents should be maintained in a condition that assures they are working properly.   
Employees should have regular access to continuing education to ensure that their knowledge and skills meet current needs. 
Employees should be aware that working in a medical laboratory creates obligations with respect to error prevention, timely testing, error reporting, and patient confidentiality. 

Error Prevention:
Laboratories have an obligation to reduce the risk of error through the programs mentioned above, and also by monitoring for systemic error through active programs of quality control, quality assessment, and quality indicators.  Signals of problems should be investigated to determine whether errors may have occurred and, if so, to ensure they are addressed.
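To make the monitoring idea above more concrete, here is a minimal sketch of the kind of check a quality control program performs.  It is not from the original post; the function name and the sample values are hypothetical, and the check shown is the common "1-3s" rule (flag any single control result more than 3 standard deviations from the target mean), which would trigger an investigation before patient results are released.

```python
def qc_violations(results, mean, sd):
    """Flag quality-control results that breach the 1-3s rule:
    any single result more than 3 standard deviations from the
    target mean.  Returns (index, value, deviation-in-SDs) tuples
    so the signal can be investigated, not silently ignored."""
    violations = []
    for i, value in enumerate(results):
        deviation = (value - mean) / sd
        if abs(deviation) > 3:
            violations.append((i, value, round(deviation, 2)))
    return violations

# Hypothetical daily QC values for an analyte with target mean 5.0, SD 0.2
daily_qc = [5.1, 4.9, 5.0, 5.8, 5.0]
print(qc_violations(daily_qc, mean=5.0, sd=0.2))  # the 5.8 result is flagged
```

In practice laboratories apply a larger family of such rules (and plot the results on Levey-Jennings charts), but the principle is the same: the signal exists only if someone actually looks at it.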

Pre-examination error:
The laboratory should have policies and procedures that prevent poor quality samples (wrong patient, wrong test request, wrong conditions, wrong timing) from being tested in order to reduce the risk of producing results that are clinically misleading.

Post-examination error:
The laboratory should ensure that the right person gets the right report on the right sample from the right patient in a clinically relevant time.

Addressing error:
Laboratories are complex environments, and to some extent some error is inevitable.  That being said, laboratorians have an obligation to monitor for error, to detect it early, to remediate it rapidly, to investigate why the error occurred, and, to the extent possible, to correct the underlying causes, reducing the risk of similar errors. 

Customer satisfaction:
Laboratorians should accept all complaints as cause for investigation and action.  Where problems have led to the complaints, these should be addressed and corrected.

If a laboratory is able to say with confidence that it is working within a system that enacts these tenets, then it is doing the things that will reduce the risk of producing error, increase the possibility of detecting error, and, when error is found, fix it and learn from it, thereby reducing the risk of repeating the same problem again.

If they are in place, we call that having a Quality program. 

To get more tangible, Manky Badger may want to consider a course such as the UBC Certificate Course in Laboratory Quality Management.

Sunday, August 7, 2011

Embracing Quality

I was putting in time sitting on the airplane, reading the airline magazine, and came across an interesting note ascribed to the hotelier Jaume Tapies: “People only embrace what they help to create.  It’s important to share projects with those who are invested in them.”  This caught my eye for a number of reasons: first because I was just returning from a successful quality assessment project, and second because I know that successful hoteliers understand the importance of customer satisfaction as part of the quality process.

The notion of people embracing what they create is a really important concept as we build laboratory quality systems.  Many laboratories create quality teams who create documents and quality system trees, either on paper or on-line, that are consistent with one organization or another, without ever engaging the people who are going to be expected to use the system.  Then they spend their time “communicating the system” by talking at the staff or, worse, emailing the staff about how user-friendly the system is.  “This is how to read our new SOP.  I know it seems long and looks complicated, but it has everything you will ever need.  Trust me.  Trust me.”

This is rarely a path to embracing and engaging in the quality system.

How often do you hear comments in your organization like “Oh, that’s quality.  Talk to Pat.  She’s the quality person”, or “Talk to Shirley.  I know where the Quality stuff is, but I never actually use it”.  Even worse, “this stuff is a joke.  I do it because I have to, but what a waste.  I could be doing real things, important things”.

Laboratory quality teams need to keep a few concepts in mind.  Quality is based on a very few guiding principles; how they get manifested is up to the organization.  The quality system does not exist to meet the expectations of qualitologists or ISO or accreditation bodies; it exists to help the staff prevent errors from occurring and, when they do occur, to catch them early, fix them, learn from them, and move on.  Quality systems that capture the guiding principles and at the same time fit the needs of the staff are good systems; and, more to the point, quality systems that don’t fit the needs of the people who are supposed to use them are bad systems. 

There is another concept to keep in mind.  Embracing quality is different from being engaged in quality.  Being engaged is doing the steps you have to do.  Embracing is understanding the concepts behind doing the steps you have to do.  For lots of situations, if you can get folks engaged, that is pretty good.  There are all sorts of ways to get staff engaged: you can give them pizza prizes, you can give them little gold stars, you can remind them about filling in the forms when they forget.
Getting folks engaged can be a lot of work, but will get things done. 
Getting folks engaged is a success.

But getting the message shared and having folks understand why it’s important and how it makes their lives easier and improves the outcome of their effort is always better.  Getting folks to embrace quality is an accomplishment.

So I am interested in receiving comments from folks who have ideas about the things that we can do to get laboratory staff to embrace quality.  Twitter length (140 characters) is good but not essential.

Wednesday, August 3, 2011

External - Internal Audits (EIAs)

Quality Management principles and ISO Quality standards all talk about the critical importance of Internal Audit. If you don’t go through the active process of seeking information, then it is impossible to make informed decisions on progress and strategic planning. In the absence of internal audits one cannot use the term “continual improvement”, because the term itself implies that you are actually aware of areas that need improvement.

I think that it is of value to talk about two different types of internal audits: one being where you do the assessments in your own organization (let’s call that a true internal audit) and the other where you hire a consultant or colleague to come and do it for you. I think the term External-Internal Audit has value to describe this latter situation. Both approaches have value and both can provide useful information, but the latter (an EIA), while likely to cost more, is more likely to surface critical information, based on the knowledge and expertise of the person brought in to do the assessment.

I have mentioned in the past my strong preference for voluntary quality, and EIAs are another manifestation of that voluntary quality. The assessment may be done by an outside person, but the information is intended for internal consumption only.

I raise this because I am in the middle of an EIA in a mature and academically oriented laboratory in Canada. I am looking at the structure and function of the laboratory's quality system. It is a rather global review and we are taking our time to do it right so that they will end up with credible and useful information.

A question arose during the post-assessment discussion today about what is intended by the term “top management”. There was general understanding that TM means the folks who are in a position to make decisions for change based on their ability to make decisions on resources (also known as “the pursestrings”). I don’t think that is a correct understanding, and in the public sector it would almost assuredly be a terrible definition to envision or try to put into practice. The public sector by design is multi-layered, with fiscal decisions being made at or very near the top. Almost assuredly the folks at that level are far removed from clinical or laboratory reality, which again is by intent, to reduce the risk of bias playing a role in fiscal decisions.

A more functional definition of TM (can we call them the functional top management, or FTM?) is that small group of people who are close enough to have an understanding of the organization’s mission and have the responsibility and authority to make decisions and changes, taking in mind all the resources available, which may or may not include money. This group is really important when it is working well, because its members are high enough up on the food chain that they can think about the global organization, but close enough that they can effect change at the local unit. It is a really good example of “Thinking Globally but Acting Locally”.

Waiting for money to effect change is never a good idea. Positive, assertive actions will beget money with greater and sooner success than money will beget positive actions.

Updated for grammar September 2013