Monday, July 30, 2012

New insights on Quality, People, and Culture


Two new documents came across my desk today; both are worth mentioning.  One, “Organizational Culture, Tenure, and Willingness to Participate in Continuous Improvements in Healthcare” by Marco Lam, York and Dan Robertson, is the lead article in the July 2012 edition of the ASQ Quality Management Journal.  You can find it at the ASQ site [www.ASQ.org].  The other is a Final Draft International Standard (FDIS) version of a new ISO document, ISO 10018:2012 “Quality management — Guidelines on people involvement and competence”.


What ties these two documents together is that both focus on the importance of people within an organization in making or breaking an effective Quality environment.


There is a lot of literature describing what happens when people in the organization are not engaged in developing and embracing a Quality Culture.  They put barriers in the way, or they interfere with the implementation, or, worst of all, they ignore the whole process.  All of these tactics interfere with the Quality process, and they can undermine both the work environment and the company.


Getting people on board with Quality may be one of the most significant initiatives that an organization can undertake.


FDIS 10018 is ISO Technical Committee 176’s offering of an approach to bring people on board with Quality Management, as it is expressed through ISO 9001:2008.  First, to be clear, it is not a “normative” document.  You see a lot of “can” and “should” but no “shall”.  You can translate this as “we think this is a helpful guide that you may want to consider”.


It is a pretty straightforward document and philosophy; it leans heavily on the notions of open communication, teamwork, leadership, and Quality application, innovation, and improvement.  It is well written and provides some helpful thoughts that should resonate in the smaller organization.


This 10018 document is not so much an implementation guide; there are better documents to help with Quality implementation (consider, for example, the Canadian Standards Association’s “ISO9001:2008 Essentials”).  It is more of a fine-tuning document that provides some guidance to help the organization that feels it is not getting as much out of Quality Management as it could or should.


The article by Lam and Robertson is something very different.  It is an example of survey-based research designed to answer some specific questions.
In this case the questions are ones well worth asking:  

  • If individuals perceive that their organization is committed to continuous improvement (QI), are they more likely to volunteer to participate in QI than those who do not perceive the organization the same way?
  • Are individuals who feel stable and secure (tenure) more likely to volunteer for QI projects than those who feel less secure?
  • How do secure and less secure people engage in quality initiatives if they believe the organization is not really interested in Quality?

Understanding how people in an organization feel and act is very important, and it would be wonderful if we had better tools to work with than opinion surveys, but the authors did come to a few insights.


First off, people who feel stable and secure are NOT more likely to be the folks who volunteer for QI projects, but people who thought that being stable and secure with the organization was desirable did volunteer more.  
Second, if people feel the organization is committed to Quality, they are more likely to volunteer.  
Third, if people feel the organization’s approach to Quality is more a “flavour of the day”, workers may still participate, but NOT to the same extent.


On the surface these may appear to be pretty mushy conclusions, but from my perspective they support my own experience.  Don’t look to the “old-timers” to come on board with Quality projects; look rather to the “up-and-comers”, the people who want to be tomorrow’s leaders.  And second, you can’t fool up-and-comers; the more committed you are, the more committed they will be.  Try to fool them with flash and they will see through it.  They will participate, learn the skills they want and need, and move on.


And management is left asking themselves, “what happened to all our aspiring up-and-comers?”  


And that is never a good thing.


PS:  At the moment you cannot purchase ISO 10018.  For those interested, contact your National Standards agency (ANSI, SCC, BSI, AFNOR, etc.) for more information.

Sunday, July 29, 2012

Quality and Adult Learning


In the past I have talked about the characteristics of adult learners [see http://www.medicallaboratoryquality.com/2011/01/communicating-quality-and-principles-of.html ].  Adult learners know what they want to learn, how they want to learn, and when they want to learn, and most importantly how much time, effort, and energy it takes to acquire the information.  They are demanding, with very specific motivation requirements.


In the arena of continuing education, cost is an important consideration at the beginning; adult learners make cost a factor in deciding whether to register for a course, but it is secondary to other factors in the decision to follow through.  If a course is not meeting their specific needs, the decision to walk away is relatively easy to make.


Perception of overly expensive registration fees may be a barrier to entry, but when it comes to leaving prematurely, requirements of time, effort, and energy will trump financial issues for adult learners.  From my perspective, the notion of TEEM (time, effort, energy, money) costs comes up in a number of situations.


In my own personal situation, I am taking a voluntary continuing education course in conversational French.  Despite a not-modest registration fee and a good faculty, we had a drop-out rate of 33 percent once we got to the point where extra time and effort were needed for progress.


And a corollary of the “money is a barrier at entry and effort is the facilitator of early leaving” rule is that the lower the cost, the easier it is to make the decision to walk away.  You know that to be true when you consider all the free continuing education options that are available on-line.  Many people start, few people actually follow through.  


Based on that, I suspect that the course completion ratio is probably a useful Quality Indicator for the assessment of adult education classes, whether delivered in the classroom or on-line.


In that regard, our experience is that our final course completion rate is a little over 95 percent, and I interpret that as an indicator that we have found a good balance between registration cost, effort, and reward.  We are attracting the right people, who are motivated to acquire new knowledge in the area of laboratory Quality, and we are charging a fee that those people perceive as reasonable, or at least not overly excessive.  And while the course does require participants to commit a fair block of time and effort, the vast majority do not perceive it as excessive or unreasonable.  Almost everyone who registers and starts the course completes it.


When I talk about our “final rate”, it means that on a year-by-year basis we have a drop-out rate closer to 12 percent, but we allow people who have had work or personal issues arise (we have had deaths in the family, childbirth, divorces) to re-register the following year to complete the course and earn their certification.  Almost everyone offered the option takes advantage of it.
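The arithmetic behind an in-year versus final completion rate can be sketched in a few lines.  The cohort numbers below are invented for illustration (the course figures above are percentages, not raw counts); the point is simply how deferred re-registrants move the indicator.

```python
# Sketch of a course-completion Quality Indicator. The registration
# counts here are hypothetical, chosen to mirror the percentages
# described above (roughly 12% in-year drop-out, >95% final rate).

def completion_rate(registered, completed, deferred_completers=0):
    """Completions, including deferred re-registrants who later finish,
    as a fraction of the original registrants."""
    return (completed + deferred_completers) / registered

# Hypothetical cohort: 50 registrants, 44 finish in-year,
# 4 defer for personal reasons and complete the following year.
in_year = completion_rate(50, 44)        # in-year rate, before re-registration
final = completion_rate(50, 44, 4)       # final rate, after re-registration
print(f"in-year: {in_year:.0%}, final: {final:.0%}")
```

Tracked year over year, the gap between the two rates is itself informative: it measures how much of the attrition is circumstantial (recoverable) rather than a signal about the course itself.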


Part of our success so far, I think, comes from applying principles of Quality Management to our course.  We pay close attention to our course objectives and to our participant satisfaction survey information.  We maintain a pattern of continual improvement based in part on the survey results and in part on keeping a grasp of the changing available and relevant information on laboratory Quality.  And we go through the process of external evaluation.  So in a very strong sense we try to adhere to the requirements of ISO9001:2008, with the singular exception that we are too small and too focused to impose upon ourselves a formal annual “internal audit” exercise.


For those interested in learning more about the UBC Certificate Course in Laboratory Quality Management, please follow the link to: http://www.polqm.ca/pdf/Registration/POLQM%20Advert2013.pdf

Thursday, July 26, 2012

There are no MRSA


Several years ago I gave a presentation in which I made the point that there are no Methicillin-Resistant Staphylococcus aureus (MRSA), nor Vancomycin-Resistant Enterococci (VRE), nor indeed any of the so-called “superbugs” (better known as antibiotic-resistant bacteria) … UNTIL … a sample has been collected and transported to a laboratory capable of accurately testing for antibiotic susceptibility or resistance.

I can and do argue that susceptibility testing is one of the most clinically significant procedures done by the microbiology laboratory.  In the absence of a laboratory, a clinician can diagnose urine infection, or pneumonia, or abscess formation, or sexually transmitted infection, and can make a reasonably informed decision about treatment.  But definitive decision making about the treatment of infection requires accurate information about the likelihood of response.  And that means having access to accurate susceptibility testing results.

In most people, antibiotic susceptibility only needs to be “within the ball-park”, because even if the bacteria are only partially damaged by a less-than-optimal agent, a healthy person’s neutrophils and antibodies will finish off the rest and the infection goes away.  But in people with a damaged immune response, susceptibility testing has to be far more accurate, because successful therapy is far more dependent on drug activity.

That is a long preamble to get to a simple point; antibiotic susceptibility testing must be monitored by regular and ongoing proficiency testing to ensure that systemic error has not been introduced in a way that could affect testing outcome and people’s lives.  

Fortunately we do that.  


Of all the tests performed by North American microbiology laboratories, antibiotic susceptibility testing is the one with the greatest degree of homogeneity.  Laboratories as a group usually do consistently well, and as such the percent-achieved graph is much tighter.  Despite this, there is a long tail of laboratories with considerable underperformance.

I think I can understand and interpret this pattern.  Many, perhaps most, microbiology laboratories use automated equipment that performs bacterial identification and susceptibility testing.  The susceptibility testing is very simple to perform and is interpreted by software algorithms (so-called "expert routines").  The amount done by the traditional manual agar diffusion assay is limited.  So the number of opportunities for error is reduced, unless the equipment is not working properly, or a slow-growing bacterium or one with special nutrient requirements is being tested.

Consistent with most of our examinations of percent-achieved scores, the tail of reduced performance is with the smaller laboratories (data not shown).  My bias is that the reduced performance is tied to older equipment, or to people not recognizing that unusual organisms may have questionable results and need some form of verification.  I suspect that both of these are directly linked to the problems of reduced supervision, reduced opportunities for continuing education, and reduced budgets.  This is not a worker problem, but a management and supra-management problem.

So here is the bottom line.  Susceptibility testing is done very well, but some laboratories continue to have problems.  Accurate information on the amount of antibiotic-resistance activity in a community is directly dependent on the Quality of the laboratory generating the information.  It is inappropriate to tolerate any laboratory getting its proficiency testing right only 70 or 80 percent of the time, or less.  It is a disservice to clinicians and it puts patients at risk.


PS:
I mentioned that susceptibility testing is “one of the most clinically significant procedures” performed by microbiology laboratories.  The most significant procedure is the detection and documentation of cluster outbreaks, whether in the hospital or the community setting.

More on this later.


Monday, July 23, 2012

Laboratory Performance 2012 - a warning


I have been tracking Proficiency Testing performance in laboratories for a long time, but most objectively over the last 10 years.  I presented this information a year ago, and I am presenting it again, updated by another year.  






What the information says is that as the size and complexity of the laboratory get smaller (category A is the most complex and category C1 is the least), laboratories’ performance on PT challenges gets poorer.  This is true year over year, and it is magnified because the larger laboratories get more samples with more diversity and more complexity, while the smaller laboratories get samples that are less diverse and less complex.
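The comparison described above is, at its core, a grouped summary of percent-achieved scores by laboratory category.  The scores below are invented for illustration only (the real figures are in the CMPT annual report); the sketch just shows the shape of the analysis.

```python
# Hedged sketch: summarizing PT percent-achieved scores by laboratory
# category (A = largest/most complex ... C1 = smallest/least complex).
# All numbers are hypothetical placeholders, not CMPT data.

from statistics import mean

scores = {
    "A":  [98, 97, 99, 96],
    "B":  [95, 93, 96, 90],
    "C1": [88, 92, 75, 85],   # note the long-tail low performer
}

for category, values in scores.items():
    print(f"{category:>2}: mean {mean(values):.1f}%, worst {min(values)}%")
```

Even in this toy version, the mean tells only part of the story; the minimum (the tail) is where the cautionary flag lives, which is why a low individual percent-achieved score matters more than a respectable category average.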


Now, I understand that performance on proficiency testing is not the sole, nor necessarily the best, indicator of clinical laboratory performance, but I will argue that it is a very good proxy.  While good PT performance does not necessarily ensure good clinical performance, less good performance reflects problems, often of a systematic or structural nature.  If a laboratory does not perform well on samples that closely simulate typical clinical samples (and across Canada, that is the norm), it is a cautionary flag that clinical performance may be suffering as well.


And that is why having a low percent achieved score on PT challenges is a problem that should be monitored and addressed.


I have argued in the past, over and over and over, that the system is stacked against the smaller laboratories, because they have fewer staff, less direct supervision, less time and fewer resources for continuing education, and little support at key levels.


But this year sees the start of a new trend in a number of provinces, where this problem takes on a new urgency.  Health care across Canada is both broken and broke.


As the developed world struggles to find the elusive financial recovery, all jurisdictions are cutting back.  We use all sorts of positive euphemisms such as “doing better with less” and “innovative restructuring” but the public healthcare system is losing staff at every level, and much more emphasis is being put on home care. 


Frankly, I don’t personally see that as all that problematic.  Hospital care was never that special anyway, and we have all become really aware of how much damage we do in hospital care with unwashed hands, deteriorating cleanliness, and less-than-optimal nutrition.  So if you have someone at home, or can arrange to have someone at home, to provide recuperative and longer-term homecare service, that is probably a good thing.


Back in the eighties (now some twenty-five years ago!) the mantra was “closer-to-home”.  Now it is becoming “in-the-home”.


But as folks go home, it means that they live further away from the big facilities with the big laboratories and all their resources.  Folks at home are much closer to the smaller laboratories.  Smaller laboratories are finding that their workload is going up, and their complexity is going up.  Now they are seeing samples from patients that they have not seen for a long time.  But the problem is that if they were under-equipped for small-volume, low-complexity work, they are in a worse position now.


A colleague of mine has recently taken on the challenge of providing the additional support and supervision for the smaller laboratories in his province.  And that is a good thing.  If that does not become the norm then you can predict that the support system for home care will flounder and that will only make a weak situation much worse. 

So here is the message:  Healthcare collapsing; homecare arising.  Get help for the small “closer-to-home” laboratories or suffer the consequences.  




Addendum:
The full CMPT annual report will be out and available in early October.  You can read last year’s report at www.CMPT.ca





Monday, July 16, 2012

Quality and Research Funding


This will seem like a grouse, and it is, in a way.

Today I got a letter about a grant that I did not receive.  I applied for it over 6 months ago, and indeed had long ago decided that our chances of success were trivial at best.  I was already back on the road looking for other funds.  It was something of a side-issue decision on my part to even bother applying, since expectations were low.  But for those interested in studies in medicine-related Quality, the opportunities for funding are few and far between.  If you are dependent upon grant funding for Quality studies, you should probably think about changing your field of interest.  Fortunately, I have access to other resources.

This fund had enough money for 8 mini-grants, each a small amount of money, barely enough to support a graduate student for a year.  It was a grant I planned to use to cover some of the expenses of a new graduate student starting in September.  Unfortunately we were not successful, but then neither were 90 percent of the other applicants.  There were over 80 submissions, with enough funding for 8 successful applicants.  That is not a grant submission; it is more like buying a lottery ticket.

In a field where more than 90 percent lose, one cannot have any real expectation of success unless there is an inside track.  Frankly, and this is not intended as a total slur, I suspect that with those kinds of odds the reality of bias and loss of objectivity among the judges comes into play; perhaps not overtly, but bias nonetheless.

The process in many ways is similar to my time on the medical student admissions committee.  Looking at grant applications is like looking at admissions applications.  Basically, the best you can do is split the group into 3 subgroups: the definite yeses, the maybes, and the definite nos.  Then the politics would begin, with each committee person pushing to promote their individual and collective favorites.  By the time the smoke had cleared, the winners were defined, along with the waiting-list group and the rejections.  People would move from subgroup to subgroup until the final decisions were made.  Not uncommonly, some of the “best by the numbers” definite yeses would end up on the waiting list, and sometimes would get displaced all the way to the “maybe next year” list.
I’m not saying that is what happened here, but it would be hard to work through a large pile of applications any other way.

To make it clear, I am not particularly upset here; every competition has its winners and losers, and the amount of money at stake was not very big.  And I will have enough to cover the expenses, even without the grant.

My thoughts go to a different set of issues.  Medical funding has for the longest time fallen along the traditional tracks of basic research, usually within certain subsets based on traditional subjects, including sub-specialties.  There is little room within this funding model to address new advances and innovation in Quality.  While there are smaller pools of privately endowed moneys, these too go primarily to heart disease or cancer or childhood diseases, in recognition of donors’ family members.  For Quality research to grow, the pool of research funding dollars dedicated to Innovation and Quality will have to grow beyond what exists today.

On the positive side, this pool of money was chased by a veritable herd of interested folks wanting to be involved in Quality Management-focused research.  It is encouraging to know that our little group is getting larger and more active.

With increasing numbers come influence and perhaps the ability to develop more financial support.  And if not support, then perhaps a little inside track of our own.

So congratulations to the winners.  Maybe it will be my turn next.

Saturday, July 14, 2012

Exciting Changes in POLQM



I regularly mention our certificate course in laboratory quality management, not only because I teach it, but because over the years it has been one of the most successful contributors to laboratory Quality Management in Canada and in many places around the world.  I suspect there is only one other course that has certified more Quality Managers of a general nature, but none more successful than ours, which focuses on medical laboratories.
It is clear that the course is very valuable in the minds of the participants who take it.  [see http://www.medicallaboratoryquality.com/2012/07/polqm-through-mirror-forward.html ]






I think the key to our success is that the core of the course remains essentially the same.  It remains a small-group virtual classroom course with heavy emphasis on peer and faculty interaction, but at the same time there are substantial changes every year.  We have broadened the scope of the course, added new faculty, added more challenges in the form of discussions and assignments, and introduced better technology, including video clips and YouTube.  All the change means that the faculty cannot approach each year as the same-old.  They have to keep it fresh.


If there has been one area in which I have taken personal pleasure, it has been the participation of pathologists and residents.  While the numbers are still small, it means that the awareness and engagement of top management in the medical laboratory is happening, and the participation of residents in the course means that the concepts of Quality will be present going forward.  What a change that will be!!


For the January 2013 course we will be extending the course by an additional week to spend some time on Quality tools that go beyond Quality Indicators.  These will be education-oriented tools that enhance local training and help foster a Culture of Quality.


Last year we were able to strike an arrangement through the Standards Council of Canada to provide participants with free access to a collection of 5 international standards that can have a significant impact on the Quality processes of their clinical laboratories anywhere in the world.  This year that program will be expanded to up to 10 standards.  That benefit alone is worth approximately $1300.  It is an extremely generous program by the Standards Council of Canada.


We are working with several other standards development bodies to have some similar arrangements.  


Registration starts in September 2012   


We are looking to increase the course size to up to 40 participants this year, but will not go any higher, because doing so could impact the degree of peer and faculty interaction that occurs.  
We regularly make changes, but only those that enhance education and are consistent with continual Quality Improvement.

Tuesday, July 10, 2012

7th Rule for Customer Satisfaction Surveys


A little over a year ago I wrote my rules for satisfaction surveys (see http://www.medicallaboratoryquality.com/2011/06/satisfaction.html), which basically point to keeping them short, focused, and customer friendly.  In my opinion the rules are a pretty solid foundation for effectively learning from your clients about the quality of service that you provide.
Since the original writing I have come to appreciate another truth, one that I think is strong enough and valuable enough to become a seventh (7th) survey rule.

Ask the question that needs to be asked, even if you may not like the answer.  

It’s very easy to create surveys that will always give you positive feedback by simply avoiding any potentially controversial or challenging issues, but how can you study or learn what people think if you don’t open up the discussion?  I will give an example.

Previously I wrote about our supplemental Gram stain program [ see http://www.medicallaboratoryquality.com/2012/06/eqa-and-continuing-education-saving.html ], and mentioned that we had recently done a new participant opinion survey.  Well, the report is now complete and the results are available.  You can read the report at http://www.cmpt.ca/pdf_other_surveys/2012_Supplementary_Gram_Program.pdf

First I can tell you about the easy to interpret positives. 
When we asked whether the survey responders were the people who actually looked at the slides, we found that the vast majority were, either as the primary examiner or the reviewer.  That is a good thing, because it enhances the value of their responses.

The vast majority thought the slides were of a consistently good quality and looked and stained like typical clinical samples.  Second, one hundred percent of respondents thought the program provided acceptable or better educational value.  And third, when thinking about all the slides, their delivery, and the educational value, ninety-six percent gave us a thumbs-up.  All in all, I consider that a positive message.

But despite that, if we have a single problem, it has been working through the technical aspects of laying down good host inflammatory cells onto the slides so that they look like typical clinical samples.  That has been a real struggle.

In our larger Gram stain program we asked participants in 2008 about the cells, and 30 percent gave us a thumbs-down, rating them as either unacceptable or poor.  So we have spent a lot of research and development time trying to solve the problem, and from our perspective we have moved forward a long way.
Without getting into detail, we have done a lot of work on cellular fixation, storage, and delivery onto the glass slides.  From our Quality Control we see less rounding-up of cells and greater stability.  We have also learned how to apply either neutrophils or mononuclear cells (lymphocytes) so that we can provide slides that simulate both acute and chronic meningitis.



So when it came time to do this new survey, it was clear that we had to ask about the quality of the cellular component.  A great story would have been a clear level of improvement in the participants’ opinion.  Unfortunately, it did not work out that way.

While there was improvement, it was neither substantial nor significant.
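Whether an improvement like this is statistically significant can be checked with a simple two-proportion z-test comparing the thumbs-down rates between the two surveys.  The respondent counts below are hypothetical, chosen only to illustrate the calculation (the published report has the real figures).

```python
# Hedged sketch: two-proportion z-test for a change in "thumbs-down"
# rates between two surveys. Counts are hypothetical, not CMPT data.

from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 2008 survey, 18 of 60 unhappy (30%);
# new survey, 11 of 50 unhappy (22%).
z, p = two_proportion_z(18, 60, 11, 50)
print(f"z = {z:.2f}, p = {p:.2f}")  # with these counts, p is well above 0.05
```

With these made-up counts the rate drops from 30 to 22 percent, yet the p-value sits well above 0.05: a real-looking improvement that is not statistically significant, which is exactly the frustrating middle ground the survey landed in.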

I could be satisfied that a clear majority see the cells as OK, but that is not how Plan-Do-Study-Act works.  We have some more thoughts on how to get to the root of the problem.  Maybe it is a damage-during-transport issue, or maybe it is an interpretation issue.  We have to get the participants’ slides back and see what they are seeing.  It will take some more planning and some more time.  But at least we are working with information and the knowledge that we are heading in the right direction.

So the old Clinton model of Don’t Ask and Don’t Tell does not work as a Quality monitoring strategy.  Ask the question that needs to be asked, and continue on. 

Continual improvement.