Let me start by saying that I do a lot of surveys. To date my account has 82 surveys completed and an additional 3 currently active. I survey students regularly during courses, and annually we do at least one customer satisfaction survey for CMPT. I have done surveys for the International Organization for Standardization (ISO) and for International Laboratory Accreditation Cooperation (ILAC). Over the years I have become adept at creating surveys that address the issues that I want addressed.
Last week's survey was my first experiment in linking a survey to a discussion website like MMLQR. I would not call it a totally successful experiment.
When I look at the reported results, the first thing I noted is that we are attracting a variety of laboratory Quality professionals from Canada and internationally. There are some positive trends. Based on a 6-point Likert scale, this site was ranked as either Excellent or Good by nearly everyone with regard to variety of topics, relevance, clarity, accessibility, and refreshment. There were no "poor" or "unacceptable" responses. The same was true for the Overall assessment.
So this is all good, Yes?
Well, it provides documentation that supports impressions based on the progressively increasing readership, and it confirms that the people that I am interested in engaging in conversation are finding the site. But based upon the number of tracked page views, it looks like less than 4 percent of people connecting to MMLQR have responded to the survey.
With the information that I can garner, I don't know how many people opened the survey but chose to answer no questions, but I assume that is a very small number.
So there is a problem, but a generalisable one. Almost all the surveys I have done in the past have been to a closed or fairly closed population, where I could go back to the group and try again and again. This is a survey of an open population. In that regard it is similar to attempting a satisfaction survey of people exiting a laboratory patient service centre, or of physicians who use laboratory services.
In all these situations, one can generate a denominator of how many potential responses there could have been. The challenge is how to increase the numerator without introducing bias, either positive or negative.
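The numerator-over-denominator arithmetic above can be sketched in a few lines. The numbers here are hypothetical, chosen only to illustrate the kind of response rate the post describes (the post reports only that responses were under 4 percent of tracked page views):

```python
# Hypothetical figures for illustration; the actual counts are not given in the post.
page_views = 500   # assumed denominator: tracked page views over the survey period
responses = 19     # assumed numerator: surveys with at least one question answered

rate = responses / page_views
print(f"Response rate: {rate:.1%}")
```

With these assumed numbers the rate works out to 3.8 percent, consistent with "less than 4 percent" but in no way a reconstruction of the real data.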
In the laboratory setting, one might try creating focus groups as definable groups, but one has to offer incentives to garner participation. This is potentially expensive and would only be workable if one accepted the risk of confidentiality breaches. One could combine focus groups with electronic surveying, but one would still need identifiers to work with.
In this setting, I have no access to identifiers, and no obvious inducements that might entice a response.
So it is back to the drawing board with some questions to be asked, like who do I want to attract to the survey, how can I entice them to participate, what constitutes a sufficient cluster, and what kinds of questions will capture the information that I want. And maybe to affirm why I want to generate the information in the first place.
It's kind of like my own PDSA (Plan-Do-Study-Act) cycle.
If I come up with some answers I will try again.
In the meantime if you are in the 4%, many thanks for participating.
In the meantime, I am going to take a few days off and come up with my predictions and resolutions for a happy and Quality 2011.
For those of you who celebrate the day, Merry Christmas.
Just a thought:
A survey posted right around vacation time (in this case close to Christmas) may result in fewer participants not due to the survey itself, but due to outside factors such as vacation time, last-minute shopping, etc.
Perhaps the same survey around, say, February or March will yield higher participation.