Friday, August 23, 2013

Running a MOOC - running a muck?




I was flipping through a few television channels this morning and happened upon a presentation being given by President Obama to a big crowd.  He was talking about college education and how much better it will be when it is experienced on-line.  I later found out he was speaking at the University of Buffalo about his new plan to reduce the costs of college in the United States.  I was intrigued, since I thought that education in the US, as in Canada, is a State (Provincial) issue, and not really within his domain.  But I will certainly acknowledge that I am not an expert in US regulatory and legislative pathways.

Afterwards I had the chance to read more about the presentation.  It seems there are a number of components to his new plan.  One is a ranking system which “measures” items like “tuition, graduation rates, debt and earnings of graduates, and the percentage of lower-income students who attend”, which to me sounds like the same thing that Maclean’s magazine has done every year in Canada for the past 20 (?) years, and that American magazines, including US News & World Report and Forbes, have done in the US.  

But the part of the presentation that really caught my attention was the discussion of on-line education.  The President was talking about MOOCs (Massive Open Online Courses) as the wave of the future.  

Maybe, but I suspect he is on the wrong track.  I agree that the case for more on-line delivery is becoming too compelling to ignore for nearly all university courses, with the exception of courses where a laboratory or similar hands-on experience is required. 

MOOCs, as I understand them, are basically “free” online courses composed of a lot of content, including text, video, graphics, and voice, but little direction or support.  Count me as suspicious.  

Courses that are “free” usually cost a fortune in grants to build and promote, so while they may be “free” to the student, they still cost a bomb to create.  Granting agencies will likely tire very quickly of funding MOOC start-ups that offer little opportunity to recoup costs or develop maintenance funding.  And “massive” courses are usually pitched to the lowest common denominator, and are provided without any guidance or support.  I suspect that MOOCs will have huge enrollment and minimal follow-through.  Interesting, but while time will tell, I suspect that MOOCs will have all the longevity of Myspace.

I don’t run a MOOC (or is that a muck?), and to be technically correct I don’t run a SOOC (small open online course, if there is such a thing) either. To stretch the jargon a bit, I would call our course a STOC (Small Tuitioned Online Course), because in my opinion Quality Management is not a subject that lends itself well to being either Massive or Open.  

Over the last 11-12 years of running our course we have gained some valuable insights into adult education for Quality.

1: People don’t learn Quality through reading about Quality.
2: People learn Quality through discussion and debate with like-minded peers.
3: Education in Quality is benefited by the active participation and guidance of experienced knowledgeable faculty. 
4: People who take courses on their own, without a support structure or some sort of expectation, rarely continue courses to their completion.  

Our course combines the discipline of a real-time, in-class university course with the benefits that on-line delivery can offer.  

On the conventional side, we accept applications but screen for appropriateness.  People have to have at least 5 years of laboratory experience before being taken into the course.  We charge a tuition which includes provision of all books designated as required reading.  Each week has a beginning day on which we start new subject discussions, and an end day by which all assignments have to be completed and submitted.  And all aspects of participation, including contributions to discussion, assignments, quizzes, and examinations, are evaluated.  

But superimposed on this we provide the benefits of on-line education.  People can participate when they want to, irrespective of time zone, rush hour, or family or job expectations.  Participants don’t have to be in one geographic place, and don’t have to worry about parking, coffee restrictions, or sharing the classroom with people with colds or influenza or chickenpox.  When you travel, as long as you have access to the internet you can stay in contact with the course, even from countries with limited connectivity.  And perhaps the most important benefit is the opportunity to connect and network with like-interested folks around the world.

We are very comfortable with our small group, but would not be unhappy if it increased by a few more.  We tend to have a cut-off in mind to ensure that the number of participants does not undermine their opportunities to participate fully. 
So we will never be a massive MOOC.  We will leave that to folks interested in learning first year physics or English literature.  But our certificate graduates will be the best medical laboratory Quality Managers trained anywhere in the world.  


If you want more information on our UBC Certificate Course in Laboratory Quality Management, please contact us through www.POLQM.ca

Sunday, August 18, 2013

Proficiency Testing Does Improve Quality



A good friend of mine once commented that you don’t get rich running proficiency testing programs.  He was correct.  There have to be other reasons to get programs started and to continue them.  For most organizations, the prime motivation is a belief and commitment that laboratory testing can and must be the best that it can be, for the benefit of patients, clinicians, and laboratorians.  It comes down to a Commitment to Quality.

I suspect that most laboratorians understand why they are required to participate in proficiency testing.  That does not make it any easier to accept.  In over 30 years I don’t recall many technologists or microbiologists telling me that they look forward to receiving and testing our challenge samples.  But the important thing is that they do it anyway.

Recently we sent out a survey to our participants asking questions about the relationship between proficiency testing and quality management.  One of the questions stated: “Laboratory Quality Systems recommend using laboratory testing errors as the foundation for investigating for systemic errors that can impact on broad aspects of their testing routine. Is it your experience that investigating CMPT proficiency testing errors has led to the detection and awareness of systemic errors that affect both proficiency testing results and also clinical testing results?”

As an aside, I will say that this is a good but not perfect question.  While it is focused and specific, the preamble is too long.   

We provided a scale of 6 different answers that varied by degree.  (See the enclosed figure).

To increase participation and reliability we follow our rules that all surveys are anonymous and optional, with no required responses.  Generally we accept responses over the first 2 weeks, and if we get a response rate of 20 percent or greater we don’t bother with additional prompts.  
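
For illustration only, here is a minimal sketch of that closing rule expressed in code.  The function name and the counts are hypothetical, not our actual survey tooling; it simply assumes we know how many laboratories were invited and how many have responded.

```python
# A minimal sketch of the survey-closing rule described above.
# Numbers and names are illustrative, not CMPT's actual tooling.

def needs_reminder(responses: int, invited: int, days_open: int) -> bool:
    """Prompt only if the 2-week window has passed and the response
    rate is still below the 20 percent threshold."""
    if days_open < 14:                  # still inside the first 2 weeks
        return False
    return responses / invited < 0.20   # below 20%: consider a prompt

# Example: 30 responses from 120 invited laboratories after 2 weeks
print(needs_reminder(responses=30, invited=120, days_open=14))  # False (25%)
```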



The responses to the question were both enlightening and encouraging.  More than seventy-five percent of laboratories reported back to us that when they get proficiency testing challenge errors, investigation of the PT error leads to finding and amending system problems.  Usually these were minor issues that were addressed with procedure tweaks, but sometimes the issues were significant.  Only about 13 percent said that they found no reason to follow up a PT error with a peek at their system to ensure that it was working properly. 
The balance (who responded with “other and a comment”) were all laboratories that had made some form of system follow-up. 

When the “other and comment” responders were combined with the direct responders, almost 90 percent of laboratories reported that a PT error led to some form of larger investigation, which in turn led to some level of system improvement. 
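
To make the arithmetic concrete, here is a small sketch with hypothetical counts; these are not the actual survey numbers, just figures chosen to reproduce the proportions reported above.

```python
# Hypothetical counts only, chosen to match the reported proportions.
responses = {
    "found and amended system problems": 61,    # a bit over 75%
    "no follow-up performed": 10,               # about 13%
    "other, with comment (some follow-up)": 9,  # the balance
}
total = sum(responses.values())
followed_up = total - responses["no follow-up performed"]
print(f"{followed_up / total:.0%} made some form of system follow-up")
# -> 88%, i.e. almost 90 percent, as described above
```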

That is a very good finding, and shows that the crafters of laboratory Quality were right when they required laboratories to participate in proficiency testing, because investigating errors contributes to better patient and customer care.

Many laboratories continue to overwork PT samples because they view PT as a test rather than as a Quality measure.  But overworking samples can hide or minimize system errors.  Some of those who reported only small procedure tweaks may have missed opportunities to discover and correct larger issues.

It appears there are a small number of laboratories that may need some help in seeing the value of error detection and awareness through PT.  While some errors can be PT-specific, if laboratories use their routine procedures it is more likely that PT errors have their foundation in the laboratory’s normal structure and function, and that taking a look would be worthwhile.  These are fully lost opportunities.
Proficiency Testing is too valuable to waste.  Rare is the event in live clinical testing where a laboratory gets near-immediate feedback when it makes a mistake, except perhaps in the domain of sexually transmitted infections.  The unfortunate reality is that the vast majority of the time the laboratory has no mechanism for capturing external failure errors unless there is a penalty involved.  Think about the recent (yet again) mess in Nova Scotia [see: http://www.medicallaboratoryquality.com/2013/08/transcription-errors-can-maim-and-kill.html ]

So to me there is a bottom line: this survey provides supportive evidence that participation in proficiency testing aids Laboratory Quality Management.
That is good for laboratories and their clients, and it is the best reason for us to continue providing medical laboratory proficiency testing programs.

PS: Hope to see you in Vancouver in October.  http://polqm.ca/conference_2013/conference_2013/conference_home.html


Tuesday, August 13, 2013

Transcription errors can maim and kill



I have said on a number of occasions that we run a microbiology proficiency testing program that is predicated on the use of simulation as a learning and challenge process.  If our samples look and act like real samples, then laboratories can use them in a variety of ways to improve process.  A recent survey indicates that we get a lot of support from our participants because of this simulation potential (more on this later).  

But there is one situation in which we do not get a lot of support; indeed, some folks get really angry.  We have a committee philosophy and policy that our samples carry identification with two identifiers, and we consider that combination to be the sample’s name.  If a sample is not designated by its proper name, regardless of the work performed, the sample fails.  This genuinely upsets some of the laboratories because they see it as unfair and unreasonable.  

With respect, I disagree.  We refer to improper naming of samples, or indeed any incorrect submission of forms, as a post-examination error.  And we take a very aggressive attitude towards post-examination error.
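
To illustrate how deliberately blunt the rule is, here is a minimal sketch of a two-identifier check; the field names and the grading label are my hypothetical choices, not CMPT’s actual forms or schema.

```python
# A minimal sketch of the two-identifier naming rule described above.
# Field names and grading labels are hypothetical.

REQUIRED_IDENTIFIERS = ("sample_number", "source_site")  # assumed identifiers

def grade_submission(form: dict) -> str:
    """Fail the sample outright if either identifier is missing,
    regardless of the quality of the laboratory work reported."""
    for field in REQUIRED_IDENTIFIERS:
        if not form.get(field):
            return "MAJOR ERROR: sample not designated by its proper name"
    return "proceed to grading of the laboratory work"

print(grade_submission({"sample_number": "CMPT-0123", "source_site": ""}))
```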

Jump to yesterday’s National Post, one of Canada’s most prestigious national newspapers.  With disappointment we read the story of 4 women who were severely harmed, indeed maimed, by the healthcare industry because someone put a breast biopsy report on the wrong patient’s chart, resulting in the wrong person getting the wrong surgery.  In another mix-up, the wrong patient ended up with a diagnostic biopsy and the other patient received delayed care, because two samples that came to the laboratory were mixed up in accessioning.

One politician’s response was “We are sooo sorry”; another was “Well, healthcare is run by people and sometimes people make mistakes”.  Yet another: “Well, we were planning to put in a bar-code system that will reduce the chance of this happening again”.

Look, I have read James Reason and his books on risk and error, and I get it: sometimes people screw up.  Sometimes we call them slips, sometimes we call them distractions, and sometimes we call them mistakes.  Most of the time they are invisible, or they cause at most some inconvenience.  But sometimes they don’t.  Sometimes, especially in healthcare, they can hurt people.  In some industries, fail-safe check systems are introduced to prevent errors from happening at critical times; in other industries, they don’t even bother talking about them.  

In the past I have talked about the casualness that exists in healthcare when it comes to post-examination error, in particular when it affects confidentiality.  [see: http://www.medicallaboratoryquality.com/2013/08/confidentiality-and-laboratory-error.html ]  If we just accept that slips occur, without acknowledging their consequences, then we allow folks not to worry about them.  And that can lead to really bad outcomes.

And at that point, the problem is no longer slips and inattention, it is failure to develop policies and processes to protect patients.

So CMPT will continue to consider transcription errors as part of the proficiency testing exercise and will continue to view them as Major Errors.  

And so should you.

PS:
Our POLQM Quality Conference is coming along really well.  Hope to see you in Vancouver.  If you come, let me know that you are a sometime reader of MMLQR.

Sunday, August 11, 2013

Competence Assessment: how regular is regular?




By now most readers of MMLQR know that I buy into the concepts of Quality and that I follow Quality Management practices in my laboratory and academic practices.  But I also see it as my obligation to understand and interpret Quality and apply it to the extent possible and to the extent reasonable.  To my way of thinking, embracing the limits of possible and reasonable creates an atmosphere of achievability.  Going beyond that is like starting an exercise program by running a marathon every day.  You can do that once, and maybe even twice, but at a certain point it ceases to be sustainable and falling off track becomes unavoidable.  The challenge is to find the level that works for you and your organization, from which you can build going forward.
It is with that in mind that I reference ISO 15189:2012 on the topic of competence.  Here is what the document says:
“Competence assessment.  Following appropriate training, the laboratory shall assess the competence of each person to perform assigned managerial or technical tasks according to established criteria.  Reassessment shall take place at regular intervals. Retraining shall occur when necessary.”
This is not a bad idea.  One needs to be confident that the people working in your laboratory know what they are doing.  And the concept of reassessment at “regular” intervals, as opposed to “annual” intervals, makes competency assessment more likely to be a program that can be sustained.
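
By way of illustration, here is a tiny sketch of what choosing your own “regular” interval might look like; the 2-year interval, the dates, and the names are my hypothetical choices, not anything ISO 15189 specifies.

```python
# A minimal sketch of "reassessment at regular intervals", assuming the
# laboratory picks its own interval. The 24-month choice is hypothetical.
from datetime import date, timedelta

REASSESSMENT_INTERVAL = timedelta(days=730)  # e.g. every 2 years, not annually

def due_for_reassessment(last_assessed: date, today: date) -> bool:
    """Flag staff whose last competence assessment is older than the interval."""
    return today - last_assessed >= REASSESSMENT_INTERVAL

print(due_for_reassessment(date(2011, 6, 1), date(2013, 8, 11)))  # True
```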

Recently I did a routine on-line survey of the laboratories that participate in our Clinical Microbiology Proficiency Testing (CMPT) program, a pan-Canadian program committed to clinically relevant proficiency testing. 
One of the questions I asked was about competency testing, because I wanted to know if laboratories used CMPT samples as part of their competency assessment process.

The first question that I asked was “Does your laboratory perform competency assessments for laboratory personnel?”, for which I gave 5 choices for response.  You can see the results in Figure 1.



I will tell you that, in my opinion, while most responders (67%) said they did so on a regular basis, I personally was hoping that more would have selected the second choice (new trainees, new hires, and staff returning from absence).  Surprisingly, not a single responder selected this choice.  While it is true that choice 1 was much closer to the letter of the standard, the second choice is much more practical and pragmatic; in my opinion, it is still an acceptable option.

In my laboratory all my staff have been with me for a long time, well in excess of 12 years.  In Canada that is pretty typical of most laboratories.  All staff were trained well at the beginning and have grown into their positions; they are experts in what they do.  By all the measures that we follow (contamination rates, late rates, complaints and compliments, sustained contracts versus lost contracts) they do very well. We have had people take prolonged maternity leave (in Canada employees can take up to one year off for parental leave), after which they underwent retraining and reassessment. 

In CMPT I do not bother with routine or “regular” competency assessment any more, except for special cases.  Rather I focus on output and performance.

In my world, it is true that some people can have life challenges, with drugs (including alcohol), illness, or stress and anxiety, all of which can impact performance.  For some, the onset may be insidious, and the impact on work may be gradual or subtle.  But the reality is that most people don’t go through crises, and even when they do, competency assessment is far too blunt an instrument to rely upon for picking up subtleties.  In other words, in my world, once we have gone past recent hires, retrains, and those who take extended leave, having active competency assessment, even on a “regular” basis, can be excessive.

All activities are consumers of Time, Effort, Energy, and Money (TEEM), all of which are finite in the medical laboratory.  Running a competency program for all personnel takes a lot of time and effort and energy.  And I find those are assets that I can better spend on other meaningful Quality-oriented activity.  Having an intact program is important, but so is picking your battles and maintaining your options.  It is as much about balance as it is about requirements. 

So when the discussion comes up with the accreditation auditor, I suggest that laboratories understand that the crafters of the standard signalled the need for flexibility in Competency Assessment by using the term “regular”, and that it is your prerogative to explain and justify how you have interpreted and used that flexibility to your laboratory’s advantage. 

It is up to you to sort out how you measure performance and how regular “regular” has to be.


PS: For more discussion on Competency Assessment, consider attending the POLQM Quality Management Conference for Medical Laboratories.