Friday, January 24, 2020

Republishing Noble's Rules for better satisfaction surveys

About a decade ago (!) I started to think about all the problems I was having trying to gather useful information from satisfaction surveys for my proficiency testing program and from my students.  Most of the results that came back were either incomplete or inconsistent.  Trying to look at the data was both an Advil moment and a waste of time.
So I started experimenting with different approaches to see if I could design better surveys and get better information.
In 2011 I wrote a blog post on getting better surveys through applying better rules.
Over the years I have revised the rules, but in the process I ended up corrupting the blog post.

So for anyone interested, I am republishing the last version, put together in 2018.


In the laboratory business we have always thought it was all about the science and not about the business.
But we were wrong.
ISO as well as WHO and CLSI (and before them, Deming and Crosby) all acknowledge the importance of “Customer satisfaction”.
It is not so much that the customer is always right, but that the customer should always have a voice and should be heard. There is an expectation to have some form of customer input on a regular basis, perhaps as often as once a year.
The reason that the standards development bodies have included this as a requirement and a basis for policy is that it doesn’t matter whether you are an academic providing a course, a laboratory providing documented information, a manufacturer providing umbrellas, a proficiency testing provider, or an equipment and reagents supplier: if your customers are not happy, then bad things start to happen.
In the private product or service sector that probably means customers stop coming. And that becomes the business killer.
 
In the public sector laboratory, the customer may not have a choice of which laboratory they have to use, but that won’t stop complaints, reputation slurs, and an increased threat of litigation. (Incidentally, this applies to accreditation bodies as well.)
Sooner or later you risk becoming the interest of the public and the media.  

Or even worse, think about the embarrassment and humiliation of a public inquiry.
All of those are major career killers.

So what to do. In the business world, the godsend solution for customer satisfaction has become the on-line survey. It is so easy to create an on-line survey and send it out to all your important customers. So easy, in fact, that it has become too easy. 

Anyone foolish enough to give their email address to a hotel or car-rental agency or restaurant gets inundated with surveys. We have become a world of survey send-outers and survey send-inners, and most of it is a waste of time.

Most surveys are poorly designed: way too long, too complex, and far too diffusely focused. If a survey takes more than 2-3 minutes to complete, you can guarantee that it either will not be completed or will be completed with junk information.


Also, you have to remember that responders always have their own bias one way or another, and have probably interpreted the questions in ways that you never dreamed of. Most surveys run a high risk of being counterproductive for addressing customer satisfaction. As they say, “Fast, easy, slick and wrong”.

If you still feel compelled to resort to surveys, spend some time setting them up so that you might get some information worth considering. (We call that PDSA.)


After years of learning the hard way, I figured out a set of simple rules that anyone interested in developing a satisfaction survey can follow.  I arrogantly coined them Noble's Rules for Successful Satisfaction Surveys.  Note: they don't guarantee success, but not keeping them in mind will pretty much guarantee failure.


(1) Focus on a single issue.
The more you try to pack into a survey, the worse it gets.  Pick a topic and get out.


(2) Ask the question that needs to be asked, even if you may not like the answer.
It’s very easy to create surveys that will always give you positive feedback simply by avoiding any potentially controversial or challenging issues, but how can you learn what people think if you don’t open up the discussion?


(3) Limit the survey to only a few questions; 5-6 is best and NEVER more than 10, and make them as uncomplicated as possible.
Get in, ask a few questions, and get out.  Don't give them a chance to get bored.

(4) Make sure that it can always be completed in 3 minutes or less. Boredom is a guarantee of incomplete surveys loaded with random nonsense answers.  It would be better if respondents didn't send those in at all, because the nonsense becomes pollution and the pollution leads to terrible interpretation.

(5) Pre-test the questions to reduce (you can never avoid) ambiguity. 
Make your questions VERY simple.  Confusing questions get confusing answers.


(6) Avoid requiring an answer. That is the other guaranteed invitation to bogus information.
Making people answer questions makes people angry.  Sometimes you can't avoid required answers, but keep them to an absolute minimum.

(7) Pick your audience and stick with it.  
General send-outs are a total waste of time.

(8) Where you can, avoid satisfaction surveys.
A more effective solution for monitoring satisfaction is to look at objective measures.  For example, count how many complaints come in and how many are resolved within a specific time.
Set up a system to catalogue every complaint, something that most laboratories never do. All those telephone and hallway gripes are complaints, and they need to be included.

You may not think they were important, but the person who mentioned them did.
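The objective measure in Rule (8) — count the complaints that come in and how many are resolved within a specific time — can be sketched as a minimal complaint log. This is only an illustration: the field names, example complaints, and the 10-day resolution target are assumptions, not part of the rules.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Complaint:
    """One catalogued complaint, including hallway and telephone gripes."""
    received: date
    source: str                      # e.g. "phone", "hallway", "email"
    description: str
    resolved: Optional[date] = None  # None means still open

def resolution_metrics(complaints, target_days=10):
    """Return (total, resolved, resolved within target_days)."""
    total = len(complaints)
    resolved = [c for c in complaints if c.resolved is not None]
    on_time = [c for c in resolved
               if (c.resolved - c.received).days <= target_days]
    return total, len(resolved), len(on_time)

# Invented example entries for illustration only.
log = [
    Complaint(date(2020, 1, 2), "phone", "late report", date(2020, 1, 6)),
    Complaint(date(2020, 1, 3), "hallway", "confusing requisition form"),
    Complaint(date(2020, 1, 5), "email", "specimen rejected without a call",
              date(2020, 1, 20)),
]
print(resolution_metrics(log))  # (3, 2, 1)
```

Even a log this simple makes the gripes countable, which is the point: the trend in the three numbers over time is the satisfaction indicator.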

Friday, November 29, 2019

A truly successful Quality Moment.



Every two years we put on a POLQM Laboratory Quality Conference here in Vancouver.  The overarching theme is always the same - what’s new in laboratory quality for British Columbia and Canada and beyond.  We focus on topics like updates on key ISO standards (like ISO15189 and ISO22870) and on understanding risk for medical laboratories (ISO 22367) and medical devices.  All important topics for laboratorians to know.
This year we had an additional theme on “Meeting the Needs” with particular reference to Crosby and his definition of Quality as Meeting Requirements and the Measurement of Quality as the Price of Nonconformance, which we modified to the Costs and Consequences of Poor Quality, underscoring that all too often it is the customer who pays the consequence of our poor quality.
We had a lot of information on today's issues, like Quality Control of Cannabis, the Impact of Gender Diversity on laboratory services, the role of patients and caregivers in the education of health professionals, and learning the skills of Leadership.  Plus much more.
From my experience, putting on conferences is NOT a money-generating activity. If we break even we consider that a success.  If we lose a little or gain a little, that is our target.  (The university is pretty clear that we are not-for-profit, but we are certainly not-for-loss!!)  If we take in a lot of money, that usually means that I charged too much.
What I enjoy from putting on conferences is the satisfaction of knowing that we contribute to quality education and quality improvement in a most immediate sort of way.  People get together, they talk, they question, they challenge, they make presentations and verbalize what they are interested in, and then they go back home with new thoughts, new ideas, and a new enthusiasm to create a better care environment for healthcare professionals, patients and their families, and the community.  It is a lot like putting on our virtual classroom courses, but even more immediate and more intense.  It is the ultimate quality and improvement moment.
Those who attended shot forward in their appreciation of how much laboratory quality is advancing.  Those that did not, did not. 
First let me emphasize that with our activities we focus on those present and spend little time thinking about those who did not attend.  But this time I feel compelled to comment a little on the negative.
From three jurisdictions we heard about spending freezes in healthcare, with particular reference to cuts in staff education.  Lots of funding for leaders and administrators but none for staff education.  Different funding pockets we were told; very unfortunate we were told; financial crisis management we were told.  All of it BS.   It gives us pause when we think about the current status of patient care when institutions put such a low priority on continuing staff education and quality improvement.
The most significant saving grace we experienced was the staff members who traveled from afar to get to the conference, using their own funding and their own vacation time to attend.  These are the people who will save healthcare in the future.
We tire of the tiresome expressions of privilege and entitlement and arrogance in folks who should know better.  Laboratory improvement is NOT derived from the high-priced help.  It comes from the people who do the work of making laboratories better.
For people interested in seeing what we discussed at our meeting, visit https://POLQM.med.ubc.ca/2019-polqm-quality-conference/2019-conference-presentations/    after December 6, 2019.

When Quality Conferences end, Real Quality Improvement ENDS

Tuesday, October 1, 2019

Do student satisfaction surveys REALLY measure satisfaction?


The central message in Quality is: monitor your customers and continually progress toward improvement.

I wish I could say that was an absolute truth in the arena of education and teaching, but in my observation, the best I can say is: not so much.  I don’t think this is entirely a consequence of disinterest in setting a quality agenda in teaching; it is also a lack of investigation, follow-through and innovation.
It should also be clear that if your goal is only customer satisfaction, then it is fair to say that you are stuck in the 1980s.  The more appropriate goal is improvement that goes beyond satisfaction, now often called “customer delight”.

I have raised this before.  Customer delight follows the model described by Kano, which talks about providing a service beyond the normal expectation, beyond satisfaction, creating a feeling of exceptional appreciation.

To be fair, it is difficult to measure satisfaction if the sole tool is the traditional student satisfaction survey.  Surveys are at best marginal for credibly measuring satisfaction.  I created “Noble’s Rules” as a way to increase their potential.  But even with the “Rules”, surveys have nothing to offer for looking at “customer delight”.
When educators discovered surveys, on-paper or on-line, they seemed like the perfect tool.  You create a bunch of questions, students answer them, and you can then count the responses.  If one teacher gets 7 As and 3 Bs and another gets 5 As and 5 Bs, the first must be better.  The problem is that most students soon learn there is little in the surveys for them.  This is a little game from which they quickly suffer survey fatigue and boredom.  They all too quickly become robotic in their answers and far too predictable to be reliable.  Most students rank teachers on a 5-point scale with As or Bs most of the time, mainly because it is fast and easy.  Put down something else and you get these other questions.  Too much work and not worth the effort.

There are others, who love to be outliers, who feel empowered and throw in a few Cs and Ds.  Today we call this the “twittering” of student surveys: the power of outliers when protected by anonymity.

If we really want to gather information, we need more objective, independent measures to determine if we are making progress.  

So let me tell you of a supplemental measurement tool that seems to be working for us to see if our audience likes what we are doing.  
In our certificate course for medical laboratory quality management, we do a lot of year-over-year update and revision.  Since few people (if any) take our course year over year, few are aware of how much the course changes over time.
But when they finish the course they communicate with their organization manager or employer and tell them what they learned.  If they had a terrible experience, the message would likely be that the course was a waste of time.
But what we are seeing is that organizations send us more and more candidates year over year.  This is happening in multiple provinces in Canada, and in a number of foreign countries.  Over the past 5 years our repeat business not only continues to occur, but many participants start registering earlier.  For example, this year our registrants started to come in early in September, with many coming from organizations who have sent people to us before.   

We see this as benefiting from shared information.  Participant A has a positive experience and informs their colleague, who then registers early to become Participant B; or perhaps they inform their employer, who is then inclined to send more workers to increase the pool of Quality-trained persons.  Ultimately it is a quadruple win: Participant A, Participant B, the employer, and us.

So while we track individual opinions through satisfaction surveys, we can also track structural opinions by looking at where participants come from, and whether they likely came by referral.
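Tracking that structural signal is mostly bookkeeping: record which organization each registrant comes from, then see which organizations send people in more than one year. Here is a minimal sketch; the organization names and registration records are invented for illustration.

```python
from collections import Counter

# Hypothetical registration records: (organization, year of registration).
registrations = [
    ("Lab A", 2017), ("Lab A", 2018), ("Lab A", 2019),
    ("Lab B", 2018), ("Lab B", 2019), ("Lab B", 2019),
    ("Lab C", 2019),
]

def repeat_organizations(records):
    """Organizations that sent participants in more than one year —
    a rough proxy for referral-driven repeat business."""
    years_by_org = {}
    for org, year in records:
        years_by_org.setdefault(org, set()).add(year)
    return sorted(org for org, years in years_by_org.items()
                  if len(years) > 1)

def registrants_per_year(records):
    """Head count per year, to watch the overall trend."""
    return Counter(year for _, year in records)

print(repeat_organizations(registrations))  # ['Lab A', 'Lab B']
print(registrants_per_year(registrations))  # Counter({2019: 4, 2018: 2, 2017: 1})
```

A growing repeat-organization list and a rising year-over-year head count are exactly the kind of independent, measurable indicator the satisfaction survey cannot give you.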
So here is my message:

·       (A) If you feel compelled to use student satisfaction surveys, be very skeptical of the information you gather.

·       (B) If you feel you have no choice, at least improve your surveys with Noble’s Rules.

·       (C) Better yet, find another indicator that is less subjective than satisfaction surveys, more independent, more measurable, and more focused on structural issues of referrals.  (See Noble’s Rule (8).)