Monday, February 5, 2024

795,000 Serious Diagnostic Errors

 

Over the years there has been wide interest in medical and diagnostic error.  One can see why.  When a person is sick, or worse, when the sick one is their child or significant other, the first thing you want is reliable, accurate, and timely care.  It would be nice if that was what happened all the time.  Usually it does; sometimes it does not… but that is a topic for another time!

The healthcare community is clearly concerned, for all the right reasons, with trying to understand why things go amiss.  As much as there are too many long waits in the ER and too few family physicians, serious problems tend to occur infrequently, especially in countries with a well-developed healthcare system.  Sometimes, probably most times, when things go badly (again, this is a rare event), it is because of person-error resulting in a missed or wrong diagnosis.

In my own studies I was able to look at provincial medical laboratory errors recorded within the healthcare system by physicians and laboratory workers.  Because of my own knowledge and experience in laboratory quality assessment I was able to confirm the common finding that most errors in laboratory testing occur before the sample ever gets to the laboratory.  Those are really problematic because the samples get tested seemingly without difficulty or error, but the information arising can be wrong and misleading.

The other observation, unfortunately, was that many in-laboratory errors never get reported, sometimes because people were too busy, and other times just because people chose not to report.  Some bizarrely even used the reporting process to get others in trouble by reporting errors under another person’s name!

This month, a group in the United States wanted to put a number on just how many errors occur that result in serious harm.  They looked at government and hospital records, did some interesting but definitely iffy arithmetic, and came to the conclusion that some 795,000 diagnostic errors result in serious harm to patients each year.  In their paper they made two important comments: one, that in the US there are about 1 billion healthcare visits each year, and two, that the likelihood of serious harm befalling an individual patient was about 1 in 1,000.
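For context, the paper’s own round numbers make that per-visit risk easy to check; a minimal back-of-envelope sketch (both figures below are the paper’s, not mine):

```python
# Back-of-envelope check of the paper's headline numbers.
serious_harms = 795_000      # estimated serious diagnostic harms per year (paper's figure)
visits = 1_000_000_000       # approximate US healthcare visits per year (paper's figure)

risk_per_visit = serious_harms / visits
print(f"Risk per visit: {risk_per_visit:.6f}")        # 0.000795
print(f"Roughly 1 in {round(1 / risk_per_visit):,}")  # roughly 1 in 1,258
```

So the quoted “about 1 in 1,000” is itself a rounding of roughly 1 in 1,250 — which seems fair enough for what the authors themselves call an estimate.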

They also acknowledged that their work was a largely crude gathering of a variety of information, and pointed out that much of it is based on largely unverifiable estimates.  In my own mind it was much less an estimate and much more a crude guesstimate.

But all that will not matter.  Already the media have jumped to saying that people are at peril because there are almost a million diagnostic errors every year in the United States.  It provides the news sharers the next new opportunity to tell everyone their very existence is in jeopardy.  (For those interested, visit my blog entry of April 28, 2020: “We are all going to DIE”).

We unfortunately live in a society that is disturbingly comfortable with developing and using bad information for its own purposes.   It puts, in my opinion, an additional obligation on serious writers and investigators to ensure that their information does not get abused and misused.

In my opinion these authors have gone through an interesting exercise by, in their own words, pursuing a “novel technique”: gathering disparate information from a variety of places and putting it together to see if they could come up with a value that could serve as a marker for future study.

By their own acknowledgement the results are an estimate.  Personally I suspect their calculation is too crude, but as they say in Australia … “Goodonya!”

There are better techniques these days for doing mass data analysis, including AI.  While the authors did not mention trying that, I would be surprised if they did not give it a shot.   

In the meantime, when you read about the near-million serious errors that occur each year, before you get too healthcare-phobic, put it in context.

 

Newman-Toker DE, Nassery N, Schaffer AC, Yu-Moe CW, Clemens GD, Wang Z, Zhu Y, Saber Tehrani AS, Fanai M, Hassoon A, Siegal D. Burden of serious harms from diagnostic error in the USA. BMJ Qual Saf. 2024 Jan 19;33(2):109-120.

 

Restelli V, Taylor A, Cochrane D, Noble MA. Medical laboratory associated errors: the 33-month experience of an on-line volunteer Canadian province wide error reporting system. Diagnosis (Berl). 2017 Jun 27;4(2):79-86.


Tuesday, January 23, 2024

Artificial Intelligence and The Future of External Quality Assessment

 

 

I have been away for a while.   I apologize for the delay in writing.

I have been watching with some interest all the activity that surrounds generative AI through ChatGPT-4.  The idea that you can ask a computer a question and it scans the electronic datasphere, gathers a whole bunch of “information”, and then compresses it into a number of sentences of basic, interpretable English intrigues me.  It took little time for me to understand why there is so much interest (and fear) about this new tool.

Since it is readily available at a kind-of-affordable cost, I figured I should learn a little more about what it can and cannot do.

Being of a certain age, the first thing I did was buy the book “AI For Dummies”.  For my generation there is a “Dummies” book for almost every topic you can possibly want.  It was not perfect, but then again what could I expect?

The next thing was to jump in with a question: "What is the future of laboratory external quality assessment?"

Almost as soon as I got my fingers off the keyboard, ChatGPT-4 started to respond.  It took all of maybe 5 seconds.

What I got back impressed me to the extent that the response was on-topic, meaning it was framed around what I understand as laboratory external quality assessment.

The response was broken down into sections which included:

1.    Integration of artificial intelligence (AI) to revolutionize medical laboratories by improving accuracy, speed, and efficiency.  It would do this by analyzing data, assisting in interpreting test results, and helping identify potential errors or inconsistencies.

2.    Digitalization and remote testing: The increasing digitalization of healthcare could lead to a shift from traditional paper-based proficiency testing to online platforms.

3.    Remote proficiency testing could allow laboratories to participate in testing programs from anywhere, reducing the need for physical samples and paperwork. This would streamline the testing process, increase accessibility, and facilitate real-time performance monitoring.

4.    Personalized proficiency testing: As healthcare moves towards personalized medicine, proficiency testing may also become more tailored to individual laboratory needs. Laboratories may receive customized samples or challenges that align with their specific testing capabilities or areas of expertise. This approach could help laboratories identify and address any weaknesses or gaps in their performance more effectively.

5.    Continuous quality improvement: Proficiency testing programs are likely to focus more on continuous quality improvement rather than simply measuring a lab's performance. This could involve providing laboratories with detailed feedback and recommendations for improvement based on their performance in proficiency testing. Collaborative learning networks and knowledge sharing platforms may play a role in facilitating continuous quality improvement among laboratories.

6.    Regulatory changes and standardization: Changes in regulatory requirements, such as the implementation of new quality management systems or accreditation standards, may affect the future of proficiency testing. Standardization of proficiency testing methods and metrics across different regions and countries could improve comparability and harmonization of laboratory performance assessment.

Some of this I found interesting, but with apologies to the gurus of artificial intelligence, while the subject was correct, for much of the response the tense was wrong.  Most of it dealt with the near or distant past, or was just wrong.

For example, while there is interest in on-line virtual microscopy, for microbiology, chemistry, and blood banking it is difficult to imagine that test competency can be monitored without physical samples.

Also, the notion of personalized proficiency testing has been a fact for many years.  Laboratories already select the companies and the sample products they want to receive.  While there may be some refinements, they will likely be minor.

What does sound interesting and maybe even futuristic, the use of EQA to monitor knowledge and performance on continuous improvement and on regulation changes, is already in place in some EQA programs.  In our program we call that para-examination EQA.

So here is what I have learned… computers have reached a new point where they are able to access the whole datasphere and process large amounts of data on a wide range of specific topics, here and now.  Their memory systems can be trained to look for specific words and patterns, frame them in a new way, and restructure them into something different.  And present that as maybe new and novel, but not ready to take over the world.

Interesting? … Yes.   Helpful?...  In some ways.  Definitive?  ….  Not Yet.

It reminds me of another new thought (????) roaming across the drivelsphere:  “… have a vision of what can be, unburdened by what has been.”

 

 

 

Monday, September 4, 2023

Teaching Points about EQA.

 

I was going through some of my past teaching files and came across a presentation that I put together for a guest presentation in a class at the local Institute for Technology in 2016.  I had already been active in proficiency testing for many years.  Often when I look back at my teaching I wish I had done it differently.  This time is different.  I think the comments were actually spot on, and are still both relevant and insightful.

These were students in their first year of training to become laboratory technologists.  Some had some university background, but most were direct from high school.  I am not sure where or how they picked up the idea that working in a medical laboratory might be a good job.

Few had any prior exposure to medical education or laboratories, and I’m pretty sure that none of them had any prior education or training in the concepts of Quality.

My first task thus was to establish a familiar foundation to which they could relate.

“Most science laboratory students are familiar with Proficiency Testing, but probably under another name.  Often they are given samples by their teacher, which they are expected to test, to prove that they have learned a procedure properly and are able to perform tests correctly.  This is usually referred to as receiving and testing “unknowns”.  If “unknowns” are important during training, they are even more important during professional clinical practice.”

Then I tried to make the point that more often than not, when mistakes do happen, they are usually not the person’s fault.  They happen.

“Proficiency testing is used for detecting a broad array of potential errors.  Sometimes there are human errors that are made as a result of slips, or distractions, or mistakes as a result of misinterpretations or misunderstandings.  Sometimes, people forget procedures or are unaware of changes because they have been away due to illness or extended vacation, or are off regular laboratory rotations.  Other times errors are more systemic, the result of silent problems with equipment or reagents, or errors or confusions within the procedure.”

Students need to understand that while errors can happen, they are always a big deal, and can put lives at risk.

“But physicians and patients don’t care about laboratory problems; they want, indeed expect and demand, the information they receive to be the right answer: accurate, interpretable, relevant, and on-time.  Quality assessment tools, including proficiency testing, allow the laboratory and appropriate authorities to be aware of competencies on a regular basis.”

Next I introduced to the students the concept of Quality through Assessment, Prevention and Improvement. 

“Proficiency Testing is an important component of total Laboratory Quality Assessment, which also includes accreditation, internal audits, quality indicators, and customer satisfaction surveys.  Each Quality Measure gives a view of an aspect of the whole organization.  Proficiency Testing can challenge every phase of laboratory performance (pre-examination, examination, post-examination) as well as activities that surround testing, such as safety and quality control (peri-examination).”

I wanted to warn them about the consequences of not taking Proficiency Testing seriously. 

“With regret, some laboratory staff do not understand or appreciate why they are expected by authorities or accreditation bodies to participate in proficiency testing.  Some of them may even try to suck the value out of PT/EQA through deception (sending the sample to a reference laboratory for investigation) or overwork (not processing the sample with their usual procedure or with their usual workers), and worst of all, failing to investigate why and how errors occurred.”

 

I summed up with…
“In Summary, Proficiency Testing is an internationally embraced Laboratory Quality Assessment measurement tool, similar in purpose to a grown-up extension of student “unknowns”.  It is a technique that lets laboratories demonstrate their quality and competence and at the same time provides the opportunity to discover silent system problems that lead to laboratory error.  Proficiency Testing can and should be a valuable part of laboratory life for the full extent of a laboratory-based career.”

As I look back now, I’m pretty pleased with the message I put forward, now some 7 years ago.  As I recall the class response was positive.   I wish I had had the opportunity to keep in contact and later survey the students over the intervening years, to see if any part of the message actually stuck.