
Saturday, June 11, 2011

Monitoring Satisfaction through Noble's Rules


In the laboratory business we have always thought it was all about the science and not about the business.

But we were wrong.
ISO as well as WHO and CLSI (and before them, Deming and Crosby) all acknowledge the importance of “Customer satisfaction”.
It is not so much that the customer is always right, but that the customer should always have a voice and should be heard. There is an expectation to have some form of customer input on a regular basis, perhaps as often as once a year.

The standards development bodies have included this as a requirement, and as the basis for policy, because it doesn't matter whether you are an academic providing a course, a laboratory providing documented information, a manufacturer providing umbrellas, a proficiency testing provider, or an equipment and reagents supplier: if your customers are not happy, then bad things start to happen.

In the private product or service sector that probably means customers stop coming. And that becomes the business killer.
 
In the public sector laboratory, the customer may not have a choice of which laboratory to use, but that won't stop complaints, damage to reputation, or an increased threat of litigation. (Incidentally, this applies to accreditation bodies as well.)
Sooner or later you risk becoming the interest of the public and the media.  

Or even worse, think about the embarrassment and humiliation of a public inquiry.

All of those are major career killers.

So what to do. In the business world, the godsend solution for customer satisfaction has become the on-line survey. It is so easy to create an on-line survey and send it out to all your important customers. So easy, in fact, that it has become too easy. 

Anyone foolish enough to give your email address to a hotel or car-rental or restaurant gets inundated with surveys. We have become a world of survey send-outers and survey send-inners, and most of it is a waste of time.

Most surveys are poorly designed: way too long, too complex, and far too diffusely focused. If a survey takes more than 2-3 minutes to complete, you can guarantee that either it will not be completed, or it will be completed with junk information.


Also, you have to remember that responders always have their own bias one way or another, and have probably interpreted the questions in ways that you never dreamed of. Most surveys run a high risk of being counterproductive for addressing customer satisfaction. As they say: "Fast, easy, slick and wrong".

If you still feel compelled to resort to surveys, spend some time setting them up so that you might get some information you can actually use. (We call that PDSA.)


After years of learning the hard way, I figured out a set of simple rules that anyone interested in developing a Satisfaction Survey can follow. I arrogantly coined them as Noble's Rules for Successful Satisfaction Surveys.

They don't guarantee success, but not keeping them in mind will pretty much guarantee failure.


(1) Focus the survey on a single issue.
The more you try to pack into a survey, the worse it gets. Pick a topic and get out.


(2) Ask the question that needs to be asked, even if you may not like the answer.  
It's very easy to create surveys that will always give you positive feedback simply by avoiding any potentially controversial or challenging issues, but how can you study or learn what people think if you don't open up the discussion?




(3) Limit the survey to only a few questions, and make them as uncomplicated as possible. Best is to keep it to 5-6, and NEVER more than 10.
Get in, ask a few questions, and get out.  Don't give them a chance to get bored.

(4) Make sure that it can always be completed in 3 minutes or less.
Boredom is a guarantee for incomplete surveys loaded with random nonsense answers. It would be better if they didn't send the response in, because the nonsense becomes pollution, and the pollution leads to terrible interpretation.



(5) Pre-test the questions to reduce (you can never avoid) ambiguity. 
Make your questions VERY simple.  Confusing questions get confusing answers.



(6) Avoid requiring an answer. That is the other guaranteed invitation to bogus information.
Making people answer questions makes people angry. Sometimes you can't avoid required questions, but keep them to an absolute minimum.


(7) Pick your audience and stick with it.  
General send-outs are a total waste of time.


(8) Where you can, avoid satisfaction surveys. 
A more effective way to monitor satisfaction is to look at objective measures. For example, count how many complaints come in and how many are resolved within a specific time.
Set up a system to catalogue every complaint, something that most laboratories never do. All those telephone and hallway gripes are complaints, and they need to be included.


You may not think they were important, but the person who mentioned them did.
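The complaint-counting measure above can be sketched as a short script. This is a minimal illustration with made-up data; the 30-day resolution window is an assumed target, not one the post specifies.

```python
from datetime import date

# Hypothetical complaint log: (date received, date resolved or None if still open)
complaints = [
    (date(2011, 1, 5), date(2011, 1, 12)),   # resolved in 7 days
    (date(2011, 2, 3), date(2011, 3, 20)),   # resolved in 45 days
    (date(2011, 2, 17), date(2011, 2, 28)),  # resolved in 11 days
    (date(2011, 3, 1), None),                # still open
]

WINDOW_DAYS = 30  # assumed target resolution time

# Count complaints resolved within the target window
resolved_in_window = sum(
    1 for received, resolved in complaints
    if resolved is not None and (resolved - received).days <= WINDOW_DAYS
)
total = len(complaints)

print(f"{resolved_in_window} of {total} complaints resolved within {WINDOW_DAYS} days")
```

The point of the exercise is the catalogue itself: once every gripe is logged with dates, the resolution-rate metric falls out for free.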

Thursday, December 23, 2010

Message to self

Let me start by saying that I do a lot of surveys. To date my account has 82 surveys completed and an additional 3 currently active.  I survey students regularly during courses, and annually we do at least one customer satisfaction survey for CMPT.  I have done surveys for the International Organization for Standardization (ISO) and for International Laboratory Accreditation Cooperation (ILAC).  Over the years I have become adept at creating surveys that address the issues that I want addressed.

Last week's survey was my first experiment of linking a survey to a discussion website like MMLQR.  I would not call it a totally successful experiment. 

When I look at the reported results, the first thing I noted was that we are attracting a variety of laboratory Quality professionals from Canada and internationally. There are some positive trends. Based on a 6-point Likert scale, this site was ranked either Excellent or Good by nearly everyone with regard to variety of topics, relevance, clarity, accessibility, and freshness. There were no "Poor" or "Unacceptable" responses. The same was true for the Overall assessment.

So this is all good, Yes?
Well, it provides documentation that supports impressions based on the progressively increasing readership, and it confirms that the people I am interested in engaging in conversation are finding the site. But based upon the number of tracked page views, it looks like less than 4 percent of the people connecting to MMLQR have responded to the survey.

With the information that I can garner, I don't know how many people opened the survey but chose to answer no questions, but I assume that is a very small number.

So there is a problem, but a generalisable one. Almost all the surveys I have done in the past have been to a closed or fairly closed population, where I could go back to the group and try again and again. This is a survey to an open population. In that regard it is similar to attempting a satisfaction survey of people exiting a laboratory patient service centre, or of physicians who use laboratory services.
In all these situations, one can generate a denominator of how many potential responses there could have been.  The challenge is how does one increase the numerator without generating bias, both positive and negative.
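The numerator/denominator arithmetic is simple to state. A minimal sketch, with illustrative counts rather than the actual site figures:

```python
# Illustrative numbers only (not the real MMLQR figures)
page_views = 1200   # denominator: potential responses, from tracked page views
responses = 45      # numerator: surveys actually completed

response_rate = responses / page_views * 100
print(f"Response rate: {response_rate:.1f}%")  # under 4% in this illustration
```

The hard part, as the post says, is not computing the ratio but raising the numerator without introducing bias in either direction.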

In the laboratory setting, one might try creating focus groups as definable groups, but one has to create incentives to garner participation. This is potentially expensive and would be applicable only at the risk of confidentiality breaches. One could combine focus groups with electronic surveying, but still one would have to have identifiers to work with.
In this setting, I have no access to identifiers, and no obvious inducements that might entice a response.

So it is back to the drawing board with some questions to be asked: who do I want to attract to the survey, how can I entice them to participate, what is a sufficient cluster, and what kinds of questions will capture the information that I want? And maybe to affirm why I want to generate the information in the first place.

It's kind of like my own PDSA.

If I come up with some answers I will try again.
In the meantime if you are in the 4%, many thanks for participating.

In the meantime, I am going to take a few days off and come up with my predictions and resolutions for a happy and Quality 2011.

For those of you who celebrate the day, Merry Christmas.

m