Saturday, October 31, 2015

Measuring a Successful Conference



Peter Drucker, the famed author of books and ideas in the arenas of management and leadership, is often quoted as saying “what is measured is improved.”  Clearly the concept resonates, because many variations exist, including “what you don’t measure, you can’t manage; and what you can’t manage, you cannot improve.”

For the last two and a half days (October 28–30, 2015) we have held our 5th Program Office for Laboratory Quality Management Conference.  It is one of our major ongoing activities, important to our program mission and to our role in laboratory leadership.  If there is an activity we need to measure, this would be one.

So how can we measure conference success?

Attendance and revenue are two obvious measures, but each has its own inherent weaknesses.  Satisfaction surveys are also a tool of certain value.  But let me argue for some other measurements that we consider.

Total attendance in relationship to expected attendance. 
Our original plan was to reach a total of 100 people, including sponsors, presenters, and attendees.  We missed our target attendance by 15 per cent, which was a disappointment.  One of our target groups (a local health authority) reduced their participation by 25 people.  We made up our audience with more people from across Canada and international visitors, so the impact on our conference of the local authority folks not participating was diminished.

I rate our attendance goal as 4 out of 5.

Diversity of audience.
By every measure we met our goal of diversity.  We had folks from almost all provinces in Canada and people from Oman, South Africa, India, and the United States.  We had folks from the public sector, and importantly from the community health laboratories.  We had laboratory decision makers, leaders, consultants, and students, and international health experts.  

I am rating our diversity as 5 out of 5.

Participation of audience
We can measure participation in two ways: first through active discussion during round-table sessions, and second through attendance at the last session relative to the first.
At the end of each presentation session there was a round-table in which the presenters and the moderator had an open discussion on the theme and then opened the session to audience participation.  Every open session ran the full length of its planned time, and every one had to be respectfully stopped to stay on conference schedule.  That reflects full engagement.

And the last session (Friday at 4:00 PM) drew 90 percent of the attendance of the first session (Wednesday at 7:00 PM), which suggests that folks did not get bored and drift off.

I am rating Participation as 4.5 out of 5 (0.5 off for the 10% drop).

Compliments/Complaints ratio
Thinking in terms of ambiance, hotel experience, food and entertainment, quality of discussion, and audience expectations, there are many, many opportunities for comment.  Over the full conference I received 8 unsolicited compliments and no complaints.  This is separate from the satisfaction survey, for which we have not yet counted the responses.

I am rating the C/C ratio as 5 out of 5.  

Follow through opportunities.
Since the conference we have had three invitations for new shared activities and two new invitations for presentations.  We consider this a measure of success and interest.

I am rating follow-through as 4.5 out of 5.

So overall I am rating our success as 23 out of 25, a rating that I am quite happy with.  We attracted a diverse, interested, engaged audience that clearly enjoyed the meeting.  We will find out, once we get the survey results back, whether they felt they had learned new knowledge.
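For readers who like to keep their own score, the tally above can be sketched in a few lines of code.  This is just my own summary of the ratings in this post; the category labels are paraphrases, not an official rubric.

```python
# A minimal sketch of the conference scoring rubric described above.
# Each category is rated on a 0-5 scale, as in the post.
ratings = {
    "attendance vs. target": 4.0,
    "diversity of audience": 5.0,
    "participation": 4.5,
    "compliments/complaints ratio": 5.0,
    "follow-through opportunities": 4.5,
}

total = sum(ratings.values())          # sum of the five category ratings
maximum = 5.0 * len(ratings)           # best possible score
print(f"{total} out of {maximum}")     # 23.0 out of 25.0
```

The point of writing it down this way is that next year's conference can be scored against exactly the same categories, making the comparison year over year explicit.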

Clearly we have learned that we cannot and should not depend on the local health authority for participation.  For our next session we will focus our energy more aggressively on our areas of success abroad.  We have found that we can do that. 
If locals want to participate, they will.  Spending a lot of time encouraging them to participate is not productive.

I will continue to write on the conference for the next few days.

Monday, October 26, 2015

A question of academics in the "blogosphere"

Recently my university has been in the news for a very complicated problem that involves an unhappy university president, a faculty member, and the university chancellor.  (For those who do not hang around universities, the Chancellor in corporate language is sort of the Chairman of the Board, meaning that the position has power, but not in the same way as the president or CEO.)

Not to get into the weeds too far: the president was new, brought in with lots of fanfare and promise, but then suddenly quit in less than a year.  The faculty member decided to speculate in her blog on why he quit, using less than appropriate language and inference, to which the Chancellor took offense.  Then the faculty member, in less than sterling behavior, whined that by calling her on the inappropriateness of her blog speculations the Chancellor had intruded on her "academic freedom", which in turn stole all the oxygen from the real story (i.e., why did the president really quit?) and turned it into this "poor me victim" drivel.  [You can probably tell where my personal sympathies lie and don't lie.]

To bring this sad and complex story to an end, the whole mess was reviewed by a very august retired judge, who gave the opinion that what a university academic writes in a blog constitutes protected thought and should be afforded the same academic freedom protections.

If you enjoy getting into the weeds more (and why would you?), you can Google "UBC President Resigns".

But my question is: does everyone believe that what a person writes in a personal blog is somehow sacrosanct and rises to the level of protected speech?

In the US, I suspect that many would say that all opinion is indeed sacrosanct and should be considered protected under the First Amendment as "protected speech".  I may be a horse's ass with opinions to match, but as long as I don't threaten anybody, what is written in a blog should be accepted as "free speech".  I can live with that.

But it seems to me that academic freedom implies something different.  It implies some sort of intellectual effort, associated with the creation of new knowledge, new insights, and some degree of intellectual rigor.  If you want to use your blog as an alternate vehicle to the traditional peer-review process to hasten dissemination, you can do that.  That doesn't mean the audience is honor bound to accept it as truth, but if it has the dimensions of new knowledge and some degree of rigor, you can probably make the argument that this is your academic opinion, and while you may be criticized for it, no one can prevent you from writing.

But most blogs that represent solely opinion, with little or no substance and nothing resembling structure, rigor, or new knowledge, are something else.  Blogs are more often about the writer's desire to write than about creating information for others (did someone say narcissistic?).  My blog is my opinion, good, bad, or indifferent, but I personally would not consider it as something particularly special.  I have never considered adding my blog entries to my curriculum vitae (résumé), though I will include some as references in the literature if I am trying to make a point.

But now that an august judge has decided that all these writings are indeed sacrosanct and protected, maybe I will have to consider them with more respect.

Horrors !!!




Tuesday, October 20, 2015

The Future of Proficiency Testing




In 1946 Sunderman learned the hard way that medical laboratories in Pennsylvania were not meeting their customers’ needs.  This was about 25 years before Crosby and his groundbreaking definition of Quality (meeting customer requirements), but Sunderman understood that if clinicians felt compelled to send samples to multiple laboratories to get enough answers to collect and average, then the laboratories were probably not generating credible values.

In order to sort out what was going on, he and his co-investigator created a set of simulated samples and sent them to the laboratories for testing.  Unfortunately, what he discovered was that the clinicians were right, the laboratories were wrong, and the laboratory information was crap, strongly resembling a grand-scale scattergram.  What followed was the development of a formalized Proficiency Testing scheme, which eventually spread around the world.

That was a good thing, because it introduced a new level of Quality assessment that has benefited laboratories greatly by making the processes of sample examination more rigorous and standardized.
 
But the world does not stand still: medical laboratory sample examination has become mechanized, computerized, and quality controlled to the point that the machines are rarely wrong, and when they are, they are smart enough to shut down, reducing the risks of machine error.  So Proficiency Testing, especially for machine-generated laboratory data, has become arguably redundant.

Proficiency Testing for machine-generated data has become a statistical exercise looking at things like bias and uncertainty, neither of which has much to do with laboratory proficiency and competence.

That does not mean that there are no more laboratory errors; indeed there continue to be lots of errors, most of which are the consequence of human foible: some distraction, some mistake, some procedure or protocol error.

And I strongly argue that human foible should be the focus of attention for Proficiency Testing going forward.

When we look at laboratory error, what we regularly see is that most reported errors are in the pre-examination phase: poor samples, wrong samples, insufficient samples, contaminated samples.  In the examination phase we should be looking at least as closely at how the Quality Control was done as at the testing outcome.  And in the post-examination phase we should be looking at things like what kind of report was created, how it was generated, and to whom it was sent.  And there are other areas of focus, including Quality Management procedures, Safety procedures, Transport procedures, and Autoclaving practices.

All these issues can easily be tested by Proficiency Testing methods.  In some situations the methods will differ from the traditional ones.  That is where innovation and creativity become part of the play.

So here is my warning: keep doing what we always do, and Proficiency Testing will become progressively less relevant and more inappropriate, until it is just plain wrong.

Change or die.

More to come.


Sunday, October 18, 2015

Organizing a Quality Conference




I was watching a professional golf tournament this afternoon.  Given a choice I would rather play golf, but when that is not possible, I enjoy watching golf.  I like golf because it is first and foremost a personal activity with a direct reward system.  You play well and you are rewarded;  you play poorly and you are not.  But there is an important caveat;  you can play well but still not win because there can always be another player who not only played as well as you, but also managed to score better and got an even bigger reward.  


Despite what you do, there are always additional factors that you cannot control but that influence the outcome.  Call them luck, or karma; they are part of the process.  They fit the Knightian model of immeasurable risk, or what Donald Rumsfeld called the “unknown unknowns”.  They lend a sense of gamble and excitement to every decision we make.
The same thing holds true when it comes to putting together conferences.
In two weeks we will be hosting our fifth Medical Laboratory Quality Conference in Vancouver (see: http://polqm.ca/conference_2015/home.html).  In many regards this meeting has the best plan that I have ever put together.


We have reviewed our experiences from the past and found the things that we can do better.  We have tapped into all the right topics and have brought together a brilliant set of speakers.  We have a great venue in a great city and a great time schedule.  We ran a great promotion program, and most of the people I wanted to inform about the meeting have heard about it and have seen the conference website.
Importantly, we have found ways to reduce our costs that do not impinge on the things that are important to our sponsors, our speakers, or our audience.

 I have absolute confidence that everyone who attends will comment on what a great meeting it was. 


Everything that I can do to make the meeting a great success, I have done.
But there is one thing that I cannot control, and that is getting people to “pull the trigger” and sign up at the conference registration site (again at: http://polqm.ca/conference_2015/home.html).  And that is the final piece, where luck, karma, or divine intervention come into play.


The reality is that the economy is not what it could or should be, and lots of companies are cutting back on supporting conferences and sending people to conferences.  There are many people who would like to attend and participate but for whom air travel and hotels are very expensive.  Personally, when we looked into our crystal ball 18 months ago, I would have guessed that we would be further along the road to economic improvement than we are today.


But we will see how many decide to attend.  I know that we have enough resources to cover our expenses, so at our absolute baseline we are covered.  It is mostly about having enough of an audience to generate a culture of enthusiasm and contribution.


Regardless of what happens now, I know that those folks who attend will come away with a lot of pearls.  For them it will truly be an inspiring two-and-a-half days.


The rest is beyond my control.