Friday, February 5, 2016

Voice of the Customer (revisited)




I have written on the subject of customer/consumer/complainer voice many times (see for example: http://www.medicallaboratoryquality.com/2012/11/voc-voice-of-complainer.html ).  Asking for, and acting upon, input from those who use your product or service is about as Deming as you can get.  Without that input there is no “S” for PDSA.

Without feedback there is no Crosbian Quality until it is too late.  If you don’t ask, then the only way to discover that you are not meeting customers’ requirements is when they walk away.

So asking is not only important, it is critical, provided that you do it in a way that invites the responses that you want and need (see: http://www.medicallaboratoryquality.com/2011/06/satisfaction.html ).
But for every “ointment”, there is always the probability of the “fly”.  (Said another way, for every silver lining, there is always the dark cloud).   To stretch this analogy sequence one more time, what do you do when you send out your party invitations and still nobody shows?  
 
My point is that writing the perfect survey doesn’t cut it when nobody responds.
So let’s go through the possibilities.

Personal error:  You created the survey but forgot to actually send it out.  Oops – dummy!

Technical error:  You tried to create the survey but copied the link to the survey incorrectly, so that people who tried to respond could not find the survey.  Oops again – dummy dummy!!

Tactical error:  You created the survey and sent it out correctly, and yet many did not receive it.  That can actually be more common than you think.  There are some (many?) employers that do not allow survey links into their email system.

Selection error:  You picked and focused on the wrong audience.  Folks who are one-time or sparsely intermittent users are rarely sufficiently interested in giving an opinion, although that may be a really important group to try to nurture; what is it about what you are trying to do that elicits indifference?  And is there something that you can do to change their attitude and interest?

This is actually a long preamble for me to express my own personal frustration.  I work in a world with a lot of folks interested in Quality.  We have that in common.  I provide a service for which they or their employers pay for them to participate.  That should make many of these folks “motivated customers”.  I know they receive the invitation to participate and I know the link works, but a 50 percent survey-open rate is an exceedingly rare event.  Indeed it is rare that I ever exceed 33 percent.

Frankly I don’t get it.  We promote the survey along with information on why it is important.  The survey takes less than two minutes to complete.  The vast majority of information can be addressed by choice buttons, so that respondents don’t have to write anything.  There are multiple ways that their anonymity is protected.  And yet not only do they not respond, many don’t even open the survey.

Being involved in Quality usually means being interested in expressed opinion – their own and others’.  In my experience, Quality-oriented folks are rarely shy about speaking their mind, and inviting others to do the same.  And yet many (far too many, in my opinion) are comfortable bypassing an open invitation to be involved.
But let me be really clear.  Of the folks that do participate, we are really pleased with their opinions.  Most (YAY!!) are pleased with what we are doing, others maybe not so much (kind of yay).  While we can’t respond directly back to the critical or positive folks (the downside of anonymity), we can be collectively transparent by sharing the results, which we do.

Sometimes I speculate about sending out a survey to discern the characteristics of survey responders versus survey non-responders, but that would seem to be a hopeless jump down the wrong rabbit hole.  

When there appears to be no solution, does that mean give up and move on?  


Not very likely!!!



Sunday, November 8, 2015

The State of North America Laboratory Quality 2015




Earlier this week I was invited to participate in the 2015 edition of Robert Michel’s and Dark Daily’s annual Laboratory Quality Confab held in New Orleans.  I was the third speaker in a symposium on the challenges facing US laboratories.  The lead speaker was Ellen Gabler, a journalist with the Milwaukee Journal Sentinel who had written a number of articles on the questionable quality of what the Centers for Medicare and Medicaid Services (CMS) and the Clinical Laboratory Improvement Amendments (CLIA) refer to as waived tests, i.e. tests considered so easy to perform correctly that they require no quality assessment, and on the state of laboratory inspection (accreditation).  [In a nutshell… big glaring problems abound with both.]

The next speaker was Elissa Passiment, the Executive Vice President for the American Society for Clinical Laboratory Science and, importantly, the former head of the Clinical Laboratory Improvement Advisory Committee (CLIAC), the body that reviews activities associated with CLIA and reports to Congress.  Elissa talked about the challenges that CLIA has posed, and continues to pose, to US laboratories.  And as she put it, CLIA has become a driver to the bottom rather than an inspiration to excellence (those aren’t quite Elissa’s words, but certainly the sentiment!).

I talked about Quality Partners (a favorite subject of mine) [see: http://www.medicallaboratoryquality.com/2014/12/competition-and-quality-partner-dynamic.html ], emphasising both their strengths and weaknesses and how laboratories need to progress through the development of Quality Progress Plans, basically a page out of the newly published crown document of organizational quality, ISO 9001:2015.

I thought the symposium went well, although my guess is that it was way over the heads of most of the audience.  While I give them all points for wanting to be at a laboratory quality conference, for 80 percent of the audience this was their first time attending.  More importantly, and without putting too much importance on show-of-hands displays during conference presentations, when I asked how many worked in laboratories where they regularly performed internal audits, or maintained an Opportunities for Improvement list, for both questions I got about a 10-15 percent response.  So their notions about quality are not yet really grounded in the foundations of quality improvement.

Of interest to me, following my presentation, Robert Michel asked me in front of the audience with microphone in hand, how I felt about Canada being a country with provincial control over laboratory quality and would we do better if we had CLIA. 

My response was that it is a tragedy that in Canada our federal government has never engaged in a process, voluntary or mandatory, to insist on an across-the-country approach to laboratory Quality, leaving a true hodgepodge mess.  But on the other hand, I thank my lucky stars that we do not have to put up with the tired and obsolete mess that the US Congress has made with CLIA.

If there is a true and singular tragedy, it is that in 1967 the US Congress created a groundbreaking approach through a new quality concept, but over time, with the introduction of lobby power over quality power, and political expedience over any interest in patient care, CLIA has fallen from being a force for quality to its exact opposite.  CLIA today almost guarantees Quality-by-luck rather than Quality-by-design.

Even a quick look at CLIA rules today points to the absence of quality assessment for the vast majority of tests, the absence of any semblance of clinical appropriateness requirements, test acceptability tolerances that one could drive a Mack truck through, and a series of silly regulations about who you can and cannot talk to, or how many times you can or cannot test a proficiency testing sample, as if any of that is monitorable or has anything to do with Quality.  What it speaks to is how poor regulations lead to gaming rather than improvement.

Tired, broken, lacking innovation.  Not a ringing endorsement.

Given the choice between Canadian hodgepodge and American tired-obsolete, I think if I really needed to have a laboratory test performed today, I would have far greater confidence in the accuracy, quality, and interpretability of a test performed in a Canadian laboratory.

Today we have a new government in Canada with a 4-5 year majority mandate (in other countries we would call that a benevolent dictatorship), and perhaps a new opportunity for a new beginning.  Perhaps this is the right time to start communicating with my federal government contacts.

Saturday, October 31, 2015

Measuring a Successful Conference






Peter Drucker, the famed author of ideas and books in the arenas of management and leadership, is often quoted for “what is measured is improved”.  Clearly a concept that resonates, because there exist many variations, including “what you don’t measure, you can’t manage; and what you can’t manage, you cannot improve”.

For the last two and a half days (October 28-30, 2015) we have held our 5th Program Office for Laboratory Quality Management conference.  It is one of our major ongoing activities, important to our program mission and to our role in laboratory leadership.  If there is an activity we need to measure, this would be one.

So how can we measure conference success?

Attendance and revenue are two obvious measures, but each has its own inherent weaknesses.  Satisfaction surveys are also a tool with certain value.  But let me argue some other measurements that we consider.

Total attendance in relationship to expected attendance. 
Our original plan was to reach a total of 100 people, including sponsors, presenters and attendees.  We missed our target attendance by 15 percent, which was a disappointment.  One of our target groups (a local health authority) reduced their participation by 25 people.  We made up our audience with more people from across Canada and international visitors.  So the impact on our conference of the local authority folks not participating was diminished.

I rate our attendance goal as 4 out of 5

Diversity of audience.
By every measure we met our goal of diversity.  We had folks from almost all provinces in Canada and people from Oman, South Africa, India, and the United States.  We had folks from the public sector, and importantly from the community health laboratories.  We had laboratory decision makers, leaders, consultants, and students, and international health experts.  

I am rating our diversity as 5 out of 5

Participation of audience
We can measure participation in two ways; first through active discussion during round-table sessions, and second through participation in the last session in relationship to the first.
At the end of each presentation session there was a round-table where all the presenters and the moderator had an open discussion on the theme and then opened the session to audience participation.  Every open session ran the full length of its planned time, and each one had to be respectfully stopped to stay on conference schedule.  So that reflects full engagement.

And the attendance of the last session (Friday at 4:00 PM) had 90 percent of the attendance of the first session (Wednesday at 7:00 PM) which suggests that folks did not get bored and drift off.

I am rating Participation as 4.5 out of 5 (0.5 off for the 10% drop)

Compliments/Complaints ratio
Thinking in terms of ambiance, hotel experience, food and entertainment, quality of discussion, and audience expectations, there are many, many opportunities for comment.  Over the full conference I received 8 unsolicited compliments and no complaints.  This is separate from the satisfaction survey, for which we have not yet had our responses counted.

I am rating the C/C ratio as 5 out of 5.  

Follow through opportunities.
Since the conference we have had 3 invitations for new shared activities and two new invitations for presentations.  We consider this a measure of success and interest.

4.5 out of 5.
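The overall rating is just the sum of the five metric scores, each out of 5.  As a minimal sketch (the metric names are my own shorthand for the categories above), the tally works out like this:

```python
# Tally five conference metrics, each rated out of 5, into an overall score.
# Scores are taken from the ratings given above; names are shorthand labels.
metrics = {
    "attendance": 4.0,
    "diversity": 5.0,
    "participation": 4.5,
    "compliments_complaints": 5.0,
    "follow_through": 4.5,
}

total = sum(metrics.values())           # points achieved
maximum = 5.0 * len(metrics)            # 5 points possible per metric
print(f"{total:g} out of {maximum:g}")  # -> 23 out of 25
```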

So overall I am rating our success as 23 out of 25; a rating that I am quite happy with.  We attracted a diverse interested engaged audience that clearly enjoyed the meeting.  We will find out once we get the survey results back if they felt they had learned new knowledge.

Clearly we have learned that we cannot and should not depend on the local health authority for participation.  For our next session we will focus more energy on our areas of success abroad.  We have found that we can do that.
If locals want to participate they will, or not.  Spending a lot of time encouraging them to participate is not productive.

I will continue to write on the conference for the next few days.