Wednesday, February 27, 2013
I have been doing a ton of reading about Risk Management recently. I find the concept intriguing. Risk has been a subject of mathematical study for hundreds of years. It is intimately tied to statistical analysis and to mathematical concepts including Game theory and Chaos theory. Risk is not only for gamblers, bankers and insurance dealers; it is a useful tool for assisting Quality Management in the decision process.
I suspect that the recognition of risk is innate in all, or at least most, organisms. Certainly cats seem to go through a process of doing a full sensory scan when confronted with something that resembles food, and then make a decision to eat or not eat. One sees the same behavior with fish, when they look at bait and swim by. I don’t know for certain what is going on in their minds, but it sure looks like some form of risk assessment.
I suspect that the human reaction to risk has a huge overlay that distorts that innate process, sometimes in the direction of risk seeking, more often toward risk reduction, and sometimes toward outright risk aversion. Recently I had a meeting with some folks from Transport Canada who were very concerned about the level of risk associated with putting on an airplane a small, safely packaged box containing tiny amounts of bacteria for proficiency testing. Imagine if there were an in-air accident and my box crashed to the ground and my 100,000 bacteria were released. The risk would be severe. True, but I wonder how that would compare to the effects of the metal of a crashing airplane, or the thousands of litres of jet fuel, or the trillions of bacteria from each of the damaged human bodies.
After doing a lot of reading and reflecting I began to appreciate that Risk is something that every Qualitologist needs to know about. Here is the essence:
1. Risk is the effect of uncertainty on an outcome, where uncertainty is the product of incomplete information. The more information one has, the better one can calculate Risk; by corollary, the less information available, the less certain one can be about any Risk assessment.
2. Chaos theory says that for every effect there is a cause, but the cause may be so subtle as to be undetectable. There may be many causes impacting a single effect, making Risk predictions increasingly difficult.
3. What Donald Rumsfeld called “unknown unknowns” Frank Knight called “unmeasurable uncertainty”. Both make Risk prediction very difficult.
4. Some risk is inevitable because the “unknown unknowns” can have an effect at any time. Setting a policy of risk aversion will always fail.
5. Game theory says that accepting a risk level that avoids a big loss is more likely to succeed than accepting a risk level that might lead to a big win.
6. As much as we talk about Risk in calculated percentages, Risk is at best a semi-quantitative estimate.
7. The single most effective tool in the Risk toolbox is the creation of a Severity – Occurrence estimate. Outcomes that may be catastrophic in effect and occur frequently should be avoided. Outcomes that are negligible or trivial in effect and occur rarely, if at all, should be considered not a problem. The challenges fall somewhere in between.
8. But the greatest risk occurs when people who don’t have the same innate ability as a cat or a fish use tools they neither know nor understand, and try to make predictions they cannot support or justify, based less on information and more on incompetence.
Me bitter? You betcha.
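The Severity – Occurrence estimate in point 7 can be sketched in a few lines of code. The 1-to-5 scales, the score thresholds, and the three action categories below are illustrative assumptions of mine, not any standard; a real program would calibrate all of them to its own context.

```python
# A minimal sketch of a Severity-Occurrence estimate. Scales and
# thresholds are invented for illustration, not taken from a standard.

SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}
OCCURRENCE = {"rare": 1, "unlikely": 2, "occasional": 3, "likely": 4, "frequent": 5}

def risk_priority(severity: str, occurrence: str) -> str:
    """Classify a risk as avoid / manage / accept using a simple 5x5 matrix."""
    score = SEVERITY[severity] * OCCURRENCE[occurrence]
    if score >= 15:      # the catastrophic-and-frequent corner: avoid outright
        return "avoid"
    if score <= 4:       # the trivial-and-rare corner: not a problem
        return "accept"
    return "manage"      # the in-between cases are where the real work is

print(risk_priority("catastrophic", "frequent"))  # avoid
print(risk_priority("negligible", "rare"))        # accept
print(risk_priority("moderate", "occasional"))    # manage
```

The interesting decisions are not in the code; they are in deciding where the thresholds sit, which is exactly the semi-quantitative judgment point 6 warns about.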
Monday, February 18, 2013
Reading Quality Tea-Leaves in Laboratory Medicine Residency Training.
I have put on a Quality Management Seminar Series for Medical Laboratory Residents three times since 2007. As far as I am aware it is the only Quality Seminar series for Laboratory Residents in the country. I have written about this before [see: http://www.medicallaboratoryquality.com/2011/02/resident-quality-seminars-as-adult.html].
Each time the series has covered the spectrum of Quality topics over 7-8 hours spread across 3 or 4 days. We include topics like historical perspective, standards, error, Quality tools, Management Review and Leadership. It is not intended to be a definitive course, but it is a start towards awareness. Some of the residents have started taking our 20-week course to get a much fuller Quality experience. I think that is a good thing, and I trust we will find that they think it is a good thing as well.
For the first session, I was a neophyte at putting on courses, and missed the opportunity to gather much pre-seminar or post-seminar information. I fixed that in the next series and improved the process again this time. Either we believe in Quality Improvement or we don’t.
There is one question that I have asked repeatedly, which is:
In your opinion, is the information in the Quality Seminar Series most applicable to:
A: Your experiences as a resident in laboratory medicine
B: Your end-of-residency examinations
C: Your future career as a pathologist
D: None of the above.
Since we started, the most common response each year has been (C), your future career as a pathologist, and that makes sense to me.
But to me, the most interesting results are that over the period the (A) option has increased each year and the (B) option has decreased, and no one has ever used the (D) option.
Now to be fair, we get to ask the question every two years, and to date it has been answered only 3 times, and the total sample of responders each year is small; so I know that trying to interpret these results is a lot like reading tea-leaves and it would be a BIG mistake to put too much stock in this, BUT…
I see the trend of increasing awareness of the value of Quality as part of the general pathology and laboratory medicine residency (training period) as both real and positive.
Here’s why. First, a quick visit to PubMed [http://www.ncbi.nlm.nih.gov/pubmed/], the search engine for scanning articles in the US National Library of Medicine, found over 120 articles containing the phrases “Pathology and Laboratory Medicine” AND “Quality Management” OR “Quality Improvement”. If I change “pathology and laboratory medicine” to “laboratory testing” the number jumps to 15,634. If I use just the term “six sigma” I find another list of some 1,295. What that means is that any resident (trainee) who is regularly reading the journal literature is seeing these words on a regular basis as part of the accepted peer-reviewed literature.
Second, residents are commonly on the lookout for short-term projects in which they can get involved. Quality Improvement initiatives such as Internal Audits or Quality Indicator studies fit well with that.
Third, some residents are becoming more aware of structured Quality Assessment including Proficiency Testing and Accreditation. Some are even getting the opportunity to participate in site visits.
And fourth, many of the old pathologists who neither knew nor cared about all that stuff have been retiring and are being replaced with younger staff who know and understand the value that Quality programs can add to a dynamic laboratory. Getting rid of deadwood is always a good thing.
Lastly, in my experience, people interested in pathology and laboratory medicine are not well known for being shy or particularly “politically correct”. If they think something is a waste of time, there will usually be at least one person who will let you know. That none of them has taken this opportunity to point out that the Quality stuff is junk by choosing option D suggests that positive awareness is underway. And that is a good thing.
We will see what happens when we run the course again in 2014.
Monday, February 11, 2013
The Risk – Quality – Innovation Dynamic
Paul writes this month in response to an ASQ survey of teens interested in pursuing studies in Science, Technology, Engineering and Mathematics (STEM) which indicated that “nearly half of students are afraid or uncomfortable about failing”; an interesting and maybe even concerning finding. To put this into some context, some survey detail was provided. Survey participation was by email invitation to American youth over one week just after New Year’s 2013. The teen response group with the aforementioned concern about failure comprised some 500 kids between the ages of 12 and 17.
Count me as suspicious.
My first concern is that most 12-year-olds would be in or around grade 6; most have either not started, or just started, going through puberty. Many would still call them children. Seventeen-year-olds, on the other hand, are high school juniors or seniors, some (but certainly not all!) of whom are well on their way to developing a perspective of a world beyond themselves. It is hard to believe that those two groups would have much in common, especially any notion or sense of academic failure. Second, in my experience, many (most?) teens, despite angst about acne and popularity and Facebook, still see themselves as pretty much invincible.
So while I haven’t had the opportunity to look at the study design thoroughly, let me just say that I would be uncomfortable making very many generalizable comments from such a small sample. Said another way, recognizing the ISO definition of risk as the effect of uncertainty, or lack of information, on an activity or outcome, accepting the study’s headline as truth would be pretty risky business.
Having said all that, there is another question to ask, and that is how well or how poorly Quality folks deal with failure in their own situations.
Over the last while I have been writing on this theme from different directions (see: [Invention and Innovation and “new knowledge”] and [Even committed Qualitologists can make mistakes]).
It strikes me that Qualitologists occupy a special place in enterprises: on one side we can create the conditions that reduce failure resulting from insufficient information and excessive error, and on the other side we can help optimise the conditions that allow innovation to flourish. Use of Quality tools and expertise can help determine what our customers want and need and what will work for them (think Satisfaction). And promoting a culture that embraces Quality, PDSA and Continual Improvement can help foster the processes that lead to opportunity (think Improvement through Innovation).
All this got me thinking about Paul’s January A View from the Q “How Do You Define Quality?”.
So let me provide another definition for Quality:
Quality is the field of knowledge and action that provides the dynamic interface to reduce the hazards of Risk and optimize the opportunities for Innovation.
Does that fit with your job description?
Wednesday, February 6, 2013
Even committed Qualitologists can make mistakes. Darn.
This is the 10th season of our virtual classroom on-line course, and one might think that we have the process down pat. Each year we review the information from the previous year, retain what is still relevant and appropriate, and make revisions where needed.
Every year we make a few “big changes”. This year it is a new Module on modern tools for Quality and an additional Quiz. These “big” things take a little more time and attention and care. And that is where I messed up. (Again, Darn!).
Quiz 1 is completed at the end of Module 1. It is an on-line auto-graded multiple choice exercise of 10 questions that should take no more than 30 minutes. We allot an hour.
When Quiz 1 was completed, the grades were surprisingly and disappointingly low. Considering the pre-selection process we use to ensure we have the right participants, they should have done better. Once we confirmed that the problem was not a computer grading error, a connection error or a transcription error, we had to go back further, to when I wrote and set the questions and responses.
On one question, I had defined a wrong answer choice as the correct response, so everyone who answered correctly got a wrong-answer flag. This was annoying because the right answer was obvious and apparent. I had just messed up.
Additionally, two other questions were so subtle, so nuanced, that even I who had set the questions could barely figure them out. No wonder we were getting some very unhappy messages.
I remember clearly sitting in my office setting the questions. I remember the process being slow and arduous. The quiz muse was staying away and I was struggling. Then I had a rush of ideas and got the questions completed in a hurry. Clearly the rush was not necessarily lucid.
Message to self: Hurrying is a bad thing.
We decided we could not let the quiz stand, and that we would fix it, if possible, without inconveniencing the participants. This was accomplished by adjusting the auto-marking software and re-evaluating the responses. The marks climbed to exactly the level we anticipated. Then we informed the participants of the error, described the remediation process, and told them to recheck their results. Then I had a discussion with the coordinator on how I would try to avoid making the same mistake of rushing to finish. I have created a checklist that all quiz and examination writers will need to go through before questions can be submitted for final posting.
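The remediation step can be sketched in code. What follows is a hypothetical illustration of regrading every submission against a corrected answer key and reporting whose marks changed; the question labels, answers and participant names are invented, and this is not our actual auto-marking software.

```python
# Hypothetical sketch of the remediation: re-score each submission against a
# corrected answer key and report any mark that changed. All data is invented.

def score(responses: dict, key: dict) -> int:
    """Count how many answers match the key."""
    return sum(1 for q, ans in key.items() if responses.get(q) == ans)

original_key  = {"Q3": "B"}       # the miskeyed question
corrected_key = {"Q3": "C"}       # the obviously right answer

submissions = {
    "participant_1": {"Q3": "C"},  # answered correctly, was flagged wrong
    "participant_2": {"Q3": "B"},  # happened to match the bad key
}

for name, resp in submissions.items():
    before = score(resp, original_key)
    after = score(resp, corrected_key)
    if after != before:
        print(f"{name}: {before} -> {after}")
```

The point of the sketch is the process, not the code: keep the original marks, re-score against the fixed key, and tell only the affected participants what changed.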
Four lessons learned:
Setting quiz questions without deliberate and sufficient re-check and confirmation time is an unnecessary risk-taking procedure that increases potential error production.
Remediation and Correction takes a lot more time than error causation.
Quiz errors hurt program credibility.
This simple hurry-up slip has been a TEEM loss event. The gross cost to us has been 6 hours of emails, discussions and IT labor shared by 3 people, plus we have had to create the check-list and revise our procedure manual. And then we need to do the preventive thing and re-check Quiz 2 and the final examination to make sure that we (I) didn’t mess them up as well. And to that we have to add my frustration and a whole bunch of participant unhappiness.
Still, having a detection – remediation – correction and prevention process that can and does pick up and analyze (study) mistakes and amend (act on) them “expeditiously” makes the point that Quality works.
Sunday, February 3, 2013
Last week I had the opportunity to make a presentation to a group of laboratory sciences graduate students. My topic was on Quality, a topic that I suspect was pretty marginal in their sphere of knowledge or interest.
I had two agendas. The first was that they should be aware that Quality can be viewed both as a subjective characteristic based on market-influenced notions of value, craftsmanship and specialness, and at the same time as an objective, measurable characteristic based on specifications, requirements and commitment. The second was to introduce the idea not only that Quality in the objective sense has a role to play in every research laboratory, but to go further: the absence of Quality awareness makes everything that they think and do null and void.
I started with the notion that there are tiers of Quality, starting from the base of Quality Control, then Quality Assessment, and finally Quality Management.
To be fair, I acknowledge that not all laboratories will attain a level of achievement that includes a full Quality Management System. Many clinical laboratories have at best a perfunctory Quality Manual and a pretty iffy Document Control or Process Control system, and unfortunately most clinical laboratory directors do little Management Review. But I am also aware that these are by-and-large completely absent in research laboratories unless their funding agency demands that they demonstrate they follow Good Manufacturing Practices (GMP).
Many researchers scoff at the notion that they should participate in any form of Quality Assessment, thinking that it exclusively means some form of proficiency testing or inter-laboratory comparison, and they forget about the simple basics of internal audit and competency assessment that let them know whether anything is being done the way they think it is supposed to be done.
But the sad reality is that even the most basic Quality Control is by-and-large absent. All too often the basic assessment of equipment, reagents and supplies through the use of control materials barely occurs, and when it is done, the results are rarely examined critically. There is little use of control charts (sometimes referred to as Levey-Jennings charts) and a near-complete absence of their interpretation.
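The interpretation that is missing is not complicated. Here is a minimal sketch of the Levey-Jennings idea: establish the mean and standard deviation of a control material from baseline runs, then flag each new control result against simple ±2 SD warning and ±3 SD rejection limits. The numbers are invented, and real laboratories would apply fuller multirule (Westgard-style) criteria, not just these two limits.

```python
from statistics import mean, stdev

# Invented control-material results; a real chart would be built from
# 20 or more baseline runs of the same control material.
baseline = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1, 5.0, 5.0]
m, s = mean(baseline), stdev(baseline)

def flag(value: float) -> str:
    """Classify a new control result against simple Levey-Jennings limits."""
    deviation = abs(value - m)
    if deviation > 3 * s:
        return "reject"      # beyond 3 SD: the run fails
    if deviation > 2 * s:
        return "warning"     # beyond 2 SD: inspect before accepting
    return "in control"

for run in [5.05, 5.3, 5.6]:
    print(run, flag(run))
```

Even this crude version answers the question the control material exists to answer: is today's run behaving like the runs we trusted?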
I was not alone in the room in being knowledgeable on the subject. One of the seminar leaders lobbed me a really good softball question: is there not value in reproducibility as a reflection of accuracy? If a value is tested multiple times and the same value is achieved, doesn't that give credibility to the value? Tempting, but sadly, no. What it did was open up the conversation on accuracy versus precision and bias. It reminds me of an age-old description of surgeons: confident, quick, adept and wrong.
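The distinction can be made concrete with a few invented numbers: a method can repeat almost perfectly and still be wrong. In the sketch below the true value and the repeat measurements are made up for illustration; the spread of the repeats measures precision, while the gap between their mean and the true value measures bias.

```python
from statistics import mean, stdev

true_value = 10.0
# Invented repeats from a hypothetically biased instrument:
# beautifully reproducible, consistently wrong.
repeats = [12.1, 12.0, 12.2, 12.1, 12.0]

precision = stdev(repeats)          # spread of repeats (~0.08): very precise
bias = mean(repeats) - true_value   # systematic error (~+2.08): very inaccurate

print(f"precision (SD): {precision:.2f}")
print(f"bias:           {bias:+.2f}")
```

Repeating the measurement five more times would tighten the precision estimate and do nothing at all about the bias, which is exactly why reproducibility alone does not confer accuracy.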
Research is a critical part of health progress, and to be fair there has been great progress over the last couple of hundred years. But investigation is slow and erratic, and expensive beyond expensive. For every step forward, there are dozens of steps back. That is the nature of unraveling new knowledge. But we make the situation worse, not better, when graduate students don’t know or understand the roles of Quality Control and Competency. I would argue that for every laboratory that uses a standard piece of analytic equipment, there should be programs available that provide materials to ensure that the equipment is being used properly. You could call that a variation of proficiency testing or competency assessment. I would also call it common sense. I would further argue that an aware funding agency would require the regular use of such challenges, perhaps even tying funding continuity to performance.
Our universities and training centres have an obligation to teach and guide graduate students to mold them into being excellent investigators. When we leave the very basics of Quality out of the experience and equation, we are just perpetuating our folly, not fulfilling our obligations for investigation improvement.
We have another session with the group next week.