Sunday, November 8, 2015

The State of North America Laboratory Quality 2015

Earlier this week I was invited to participate in the 2015 edition of Robert Michel's and Dark Daily's annual Laboratory Quality Confab, held in New Orleans.  I was the third speaker in a symposium on the challenges facing US laboratories.  The lead speaker was Ellen Gabler, a journalist with the Milwaukee Journal Sentinel who has written a number of articles on the questionable quality of what the Centers for Medicare and Medicaid Services (CMS) and the Clinical Laboratory Improvement Amendments (CLIA) refer to as waived tests, i.e. tests considered so easy to perform correctly that they require no quality assessment, and on the state of laboratory inspection (accreditation).  [In a nutshell: big, glaring problems abound with both.]

The next speaker was Elissa Passiment, the Executive Vice President of the American Society for Clinical Laboratory Science and, importantly, the former chair of the Clinical Laboratory Improvement Advisory Committee (CLIAC), the body that reviews activities associated with CLIA and advises the federal government.  Elissa talked about the challenges that CLIA has posed, and continues to pose, to US laboratories.  And as she put it, CLIA has become a driver to the bottom rather than an inspiration to excellence (those aren't quite Elissa's words, but certainly the sentiment!).

I talked about Quality Partners (a favorite subject of mine) [see: http://www.medicallaboratoryquality.com/2014/12/competition-and-quality-partner-dynamic.html ], emphasizing both their strengths and weaknesses and how laboratories need to progress through the development of Quality Progress Plans, basically a page out of the newly published crown document of organizational quality, ISO 9001:2015.

I thought the symposium went well, although my guess is that it was over the heads of much of the audience.  While I give them all points for wanting to be at a laboratory quality conference, for 80 percent of the audience this was their first time attending.  More importantly, and without putting too much weight on shows of hands during conference presentations, when I asked how many worked in laboratories that regularly performed internal audits, or that maintained an Opportunities For Improvement list, I got about a 10-15 percent response to both questions.  So their notions about quality do not yet rest on the foundations of quality improvement.

Of interest to me, following my presentation Robert Michel, microphone in hand and in front of the audience, asked me how I felt about Canada being a country with provincial control over laboratory quality, and whether we would do better if we had CLIA.

My response was that it is a tragedy that in Canada our federal government has never engaged in a process, voluntary or mandatory, to insist on an across-the-country approach to laboratory Quality, leaving a true hodgepodge mess.  But on the other hand, I thank my lucky stars that we do not have to put up with the tired and obsolete mess that the US Congress has made of CLIA.

If there is a true and singular tragedy, it is that in 1967 the US Congress created a groundbreaking approach through the creation of a quality concept, but over time, with lobby power winning out over quality power, and political expedience over any interest in patient care, CLIA has fallen from being a force for quality to the exact opposite.  CLIA today almost guarantees Quality-by-luck rather than Quality-by-design.

Even a quick look at CLIA rules today points to the absence of quality assessment for the vast majority of tests, the absence of any semblance of clinical appropriateness requirements, test acceptability tolerances that one could drive a Mack truck through, and a series of silly regulations about who you can and cannot talk to, or how many times you can or cannot test a proficiency testing sample, as if any of that is monitorable or has anything to do with Quality.  What it speaks to is how poor regulations lead to gaming rather than improvement.

Tired, broken, lacking innovation.  Not a ringing endorsement.

Given the choice between Canadian hodgepodge and American tired-and-obsolete, if I really needed to have a laboratory test performed today, I would have far greater confidence in the accuracy, quality, and interpretability of a test performed in a Canadian laboratory.

Today we have a new government in Canada with a 4-5 year majority mandate (in other countries we would call that a benevolent dictatorship), and perhaps a new opportunity for a new beginning.  Perhaps this is the right time to start communicating with my federal government contacts.

Saturday, October 31, 2015

Measuring a Successful Conference

Peter Drucker, the famed author of ideas and books in the arenas of management and leadership, is often quoted as saying "what is measured is improved".  It is clearly a concept that resonates, because there exist many variations, including "what you don't measure, you can't manage; and what you can't manage, you cannot improve".

For the last two and a half days (October 28-30, 2015) we have held our 5th Program Office for Laboratory Quality Management conference.  It is one of our major ongoing activities, important to our program mission and important to our role in laboratory leadership.  If there is an activity we need to measure, this would be one.

So how can we measure conference success?

Attendance and revenue are two obvious measures, but each has its own inherent weaknesses.  Satisfaction surveys are also a tool with a certain value.  But let me argue for some other measurements that we consider.

Total attendance in relation to expected attendance.
Our original plan was to reach a total of 100 people, including sponsors, presenters, and attendees.  We missed our target attendance by 15 percent, which was a disappointment.  One of our target groups (a local health authority) reduced their participation by 25 people.  We made up our audience with more people from across Canada and international visitors, so the impact on our conference of the local authority folks not participating was diminished.

I rate our attendance goal as 4 out of 5.

Diversity of audience.
By every measure we met our goal of diversity.  We had folks from almost all provinces in Canada, and people from Oman, South Africa, India, and the United States.  We had folks from the public sector and, importantly, from the community health laboratories.  We had laboratory decision makers, leaders, consultants, students, and international health experts.

I am rating our diversity as 5 out of 5.

Participation of audience.
We can measure participation in two ways: first, through active discussion during round-table sessions, and second, through attendance at the last session in relation to the first.
At the end of each presentation session there was a round-table at which all the presenters and the moderator had an open discussion on the theme and then opened the session to audience participation.  Every open session ran the full length of its planned time, and every one had to be respectfully stopped to stay on conference schedule.  That reflects full engagement.

And the attendance at the last session (Friday at 4:00 PM) was 90 percent of the attendance at the first session (Wednesday at 7:00 PM), which suggests that folks did not get bored and drift off.

I am rating Participation as 4.5 out of 5 (0.5 off for the 10 percent drop).

Compliments/Complaints ratio.
Thinking in terms of ambiance, hotel experience, food and entertainment, quality of discussion, and audience expectations, there are many, many opportunities for comment.  Over the full conference I received 8 unsolicited compliments and no complaints.  This is separate from the satisfaction survey, for which we have not yet counted the responses.

I am rating the C/C ratio as 5 out of 5.  

Follow-through opportunities.
Since the conference we have had three invitations for new shared activities and two new invitations for presentations.  We consider this a measure of success and interest.

I am rating follow-through opportunities as 4.5 out of 5.

So overall I am rating our success as 23 out of 25, a rating that I am quite happy with.  We attracted a diverse, interested, engaged audience that clearly enjoyed the meeting.  We will find out once we get the survey results back whether they felt they had learned new knowledge.
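For the record, the overall score is nothing fancier than the simple sum of the five category ratings.  A minimal sketch of that roll-up (the short category labels are my own; the scores are as given above):

```python
# Roll up the five category ratings (each out of 5) into an overall conference score.
ratings = {
    "attendance": 4.0,
    "diversity": 5.0,
    "participation": 4.5,
    "compliments/complaints": 5.0,
    "follow-through": 4.5,
}

total = sum(ratings.values())
maximum = 5.0 * len(ratings)
print(f"{total:g} out of {maximum:g} ({100 * total / maximum:.0f}%)")
# -> 23 out of 25 (92%)
```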

Clearly we have learned that we cannot and should not depend on the local health authority for participation.  For our next session we will focus more aggressive energy on our areas of success abroad.  We have found that we can do that.
If locals want to participate, they will, or not.  Spending a lot of time encouraging them to participate is not productive.

I will continue to write about the conference over the next few days.

Monday, October 26, 2015

A question of academics in the "blogosphere"

Recently my university has been in the news for a very complicated problem that involves an unhappy university president, a faculty member, and the university chancellor.  (For those who do not hang around universities, the Chancellor in corporate language is sort of the Chairman of the Board, meaning that the position has power, but not in the same way as the president or CEO.)

Not to get into the weeds too far: the president was new, brought in with lots of fanfare and promise, but then suddenly quit in less than a year.  The faculty member decided to speculate in her blog on why he quit, using less than appropriate language and inference, to which the Chancellor took offense.  Then the faculty member, in less than sterling behavior, whined that by calling her on the inappropriateness of her blog speculations the Chancellor had intruded on her "academic freedom", which in turn stole all the oxygen from the real story (i.e. why did the president really quit?) and turned it into this "poor me victim" drivel.  [You can probably tell where my personal sympathies lie and don't lie.]

To bring this sad and complex story to an end, the whole mess was reviewed by a very august retired judge, who gave the opinion that what a university academic writes in a blog constitutes protected thought and should be afforded the same academic freedom protections.

If you enjoy getting further into the weeds (and why would you?), you can Google "UBC President Resigns".

But my question is: does everyone believe that what a person writes in a personal blog is somehow sacrosanct and rises to the level of protected speech?

In the US, I suspect that many would say that all opinion is indeed sacrosanct and should be considered protected under the First Amendment as "protected speech".  I may be a horse's ass with thoughts to match, but as long as I don't threaten anybody, what is written in a blog should be accepted as "free speech".  I can live with that.

But it seems to me that academic freedom implies something different.  It implies some sort of intellectual effort associated with the creation of new knowledge and new insights, and some sort of intellectual rigor.  If you want to use your blog as an alternative to the traditional peer review process to hasten dissemination, you can do that.  That doesn't mean the audience is honor bound to accept it as truth, but if it has the dimensions of new knowledge and some degree of rigor, you can probably make the argument that this is your academic opinion, and while you may be criticized for it, no one can prevent you from writing it.

But most blogs, which represent solely opinion, with little or no substance and nothing resembling structure, rigor, or new knowledge, are something else.  Blogs are more often about the writer's desire to write than about creating information for others (did someone say narcissistic?).  My blog is my opinion, good, bad, or indifferent, but I personally would not consider it something particularly special.  I have never considered adding my blog entries to my curriculum vitae (resume), but I will include some as references in the literature if I am trying to make a point.

But now that an august judge has decided that all these writings are indeed sacrosanct and protected, maybe I will have to treat them with more respect.

Horrors!!!




Tuesday, October 20, 2015

The Future of Proficiency Testing




In 1946 Sunderman learned the hard way that medical laboratories in Pennsylvania were not meeting their customers' needs.  This was about 25 years before Crosby and his groundbreaking definition of Quality (conformance to requirements), but Sunderman understood that if clinicians felt compelled to send samples to multiple laboratories just to collect enough answers to average, then the laboratories were probably not generating credible values.

In order to sort out what was going on, he and his co-investigator created a bunch of simulated samples and sent them to the laboratories for testing.  Unfortunately what he discovered was that the clinicians were right, the laboratories were wrong, and the laboratory information was crap, strongly resembling a grand-scale scattergram.  What followed was the development of a formalized Proficiency Testing scheme, which eventually spread around the world.

That was a good thing, because it introduced a new level of Quality assessment that has benefited laboratories greatly by making the processes of sample examination more rigorous and standardized.
 
But the world does not stand still.  Medical laboratory sample examination has become mechanized, computerized, and quality controlled to the point that the machines are rarely wrong, and when they are, they are smart enough to shut down, reducing the risk of machine error.  So Proficiency Testing, especially for machine-generated laboratory data, has arguably become redundant.

Proficiency Testing for machine-generated data has become a statistical exercise looking at things like bias and uncertainty, neither of which has much to do with laboratory proficiency and competence.
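To illustrate the kind of statistical exercise I mean, here is a minimal sketch of a conventional PT evaluation, in which a result is reduced to a bias and a z-score against a peer group.  The analyte, assigned value, and peer SD below are invented for illustration; real schemes, such as those following ISO 13528, are considerably more elaborate:

```python
# Minimal sketch of a conventional PT statistical evaluation (illustrative only).
# A laboratory result is compared to a peer-group assigned value: bias is the
# difference, and the z-score scales that bias by the peer-group SD.
# Conventionally |z| <= 2 is acceptable, 2 < |z| < 3 questionable, |z| >= 3 unacceptable.

def evaluate_pt_result(result: float, assigned_value: float, peer_sd: float):
    bias = result - assigned_value
    z = bias / peer_sd
    if abs(z) <= 2:
        grade = "acceptable"
    elif abs(z) < 3:
        grade = "questionable"
    else:
        grade = "unacceptable"
    return bias, z, grade

# Hypothetical glucose PT sample: assigned value 5.2 mmol/L, peer SD 0.15 mmol/L.
bias, z, grade = evaluate_pt_result(result=5.55, assigned_value=5.2, peer_sd=0.15)
print(f"bias = {bias:+.2f} mmol/L, z = {z:+.1f}: {grade}")
# -> bias = +0.35 mmol/L, z = +2.3: questionable
```

Note that nothing in this exercise asks whether the sample was collected properly, the report delivered to the right person, or the result clinically sensible, which is exactly my point.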

That does not mean that there are no more laboratory errors; indeed, there continue to be lots of errors, most of which are the consequence of human foible: a distraction, a mistake, a procedure or protocol error.

And I strongly argue that human foible should be the focus of attention for Proficiency Testing going forward.

When we look at laboratory error, what we regularly see is that most reported errors are in the pre-examination phase: poor samples, wrong samples, insufficient samples, contaminated samples.  In the examination phase we should be looking at least as closely at how the Quality Control was done as at the testing outcome.  And in the post-examination phase we should be looking at things like what kind of report was created, how it was generated, and to whom it was sent.  There are other areas of focus as well, including Quality Management procedures, Safety procedures, Transport procedures, and Autoclaving practices.

All these issues can readily be tested by Proficiency Testing methods.  In some situations these will differ from the traditional methods.  That is where innovation and creativity come into play.

So here is my warning: keep doing what we have always done, and Proficiency Testing will become progressively less relevant, more inappropriate, and eventually just plain wrong.

Change or die.

More to come.