Monday, February 5, 2024

795,000 Serious Diagnostic Errors

 

Over the last several years there has been wide interest in medical and diagnostic error.  One can see why.  When a person is sick, or worse, when the sick one is their child or significant other, the first thing you want is reliable, accurate, and timely care.  It would be nice if that was what happened all the time.  Usually it does; sometimes it does not… but that is a topic for next time!

The healthcare community is clearly concerned, for all the right reasons, to try and understand why things go amiss.  As much as there are too many long waits in the ER and too few family physicians, problems tend to occur infrequently, especially in countries with a well-developed healthcare system.  Sometimes, probably most times when things go badly (again, this is a rare event), it is because of person-error resulting in a missed or wrong diagnosis.

In my own studies I was able to look at provincial medical laboratory errors recorded within the healthcare system by physicians and laboratory workers.  Because of my own knowledge and experience in laboratory quality assessment I was able to confirm the common finding that most errors in laboratory testing occur before the sample ever gets to the laboratory.  Those are really problematic because the samples get tested seemingly without difficulty or error, but the information arising can be wrong and misleading.

The other observation, unfortunately, was that many in-laboratory errors never get reported, sometimes because people were too busy, and other times just because people chose not to report.  Some bizarrely even used the reporting process to get others in trouble by reporting errors under another person’s name!

This month, a group in the United States wanted to put a number on just how many errors occur that result in serious harm.  They looked at government and hospital records, did some interesting but definitely iffy arithmetic, and came to the conclusion that some 795,000 diagnostic errors result in serious harm to patients each year.  In their paper they made two important comments: one, that in the US there are about 1 billion healthcare visits each year, and two, that the likelihood of a serious harm befalling an individual patient was about 1 in 1,000.
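That headline arithmetic is easy to sanity-check.  A minimal sketch, using only the two figures the authors quote (795,000 serious harms and roughly 1 billion visits per year):

```python
# Back-of-envelope check of the paper's headline numbers:
# ~795,000 serious harms out of ~1 billion US healthcare visits per year.
serious_harms = 795_000
annual_visits = 1_000_000_000

rate = serious_harms / annual_visits            # harms per visit
one_in = round(annual_visits / serious_harms)   # the same figure as "1 in N"

print(f"rate per visit: {rate:.6f} (about 1 in {one_in})")
# prints: rate per visit: 0.000795 (about 1 in 1258)
```

So the raw figure is closer to 1 in 1,250, which the authors round to their "about 1 in 1,000".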

They also acknowledged that their work was largely a crude gathering of a variety of information, and pointed out that much of it is based on data composed of largely unverifiable estimates.  In my own mind it was much less an estimate and much more a crude guesstimate.

But all that will not matter.  Already the media has jumped to saying that people are at peril because there are almost a million diagnostic errors every year in the United States.  It provides the news sharers the next new opportunity to tell everyone their very existence is in jeopardy.  (For those interested, visit my blog entry on April 28, 2020… “We are all going to DIE”).

We unfortunately live in a society that is disturbingly comfortable with developing and using bad information for its own purposes.   It puts, in my opinion, an additional obligation on serious writers and investigators to ensure that their information does not get abused and misused.

In my opinion these authors have gone through an interesting exercise by, in their own words, pursuing a “novel technique” of gathering disparate information from a variety of places and putting it together to see if they could come up with a value that could serve as a marker for future study.

By their own acknowledgement the results are an estimate.  Personally I suspect their calculation is too crude, but as they say in Australia … “Goodonya!”

There are better techniques these days for doing mass data analysis, including AI.  While the authors did not mention trying that, I would be surprised if they did not give it a shot.   

In the meantime, when you read about the near million serious errors that occur each year, before you get too healthcare phobic, put it in context.

 

Newman-Toker DE, Nassery N, Schaffer AC, Yu-Moe CW, Clemens GD, Wang Z, Zhu Y, Saber Tehrani AS, Fanai M, Hassoon A, Siegal D. Burden of serious harms from diagnostic error in the USA. BMJ Qual Saf. 2024 Jan 19;33(2):109-120.

 

Restelli V, Taylor A, Cochrane D, Noble MA. Medical laboratory associated errors: the 33-month experience of an on-line volunteer Canadian province wide error reporting system. Diagnosis (Berl). 2017 Jun 27;4(2):79-86.


Tuesday, January 23, 2024

Artificial Intelligence and The Future of External Quality Assessment


I have been away for a while.   I apologize for the delay in writing.

I have been watching with some interest all the activity that surrounds generative AI through the use of ChatGPT-4.  The idea that you can ask a computer a question and it scans the electronic datasphere, gathers a whole bunch of “information”, and then compresses it into a number of sentences that can be read as basic interpretable English intrigues me.  It took little time for me to understand why there is so much interest (and fear) about this new tool.

Since it is readily available at a kind-of-affordable cost, I figured I should learn a little more about what it can and cannot do.

Being of a certain age, the first thing I did was buy the book “AI For Dummies”.  For my generation there is a “Dummies” book for almost every topic you can possibly want.  It was not perfect, but then again what could I expect?

The next thing was to jump in with a question: "What is the future of laboratory external quality assessment?"

Almost as soon as I got my fingers off the keyboard, ChatGPT-4 started to respond.  It took all of maybe 5 seconds.

What I got back impressed me to the extent that the response was on-topic, which means that it was framed around what I understand as laboratory external quality assessment.

The response was broken down into sections which included:

1.    Integration of artificial intelligence (AI): AI could revolutionize medical laboratories by improving accuracy, speed, and efficiency.  It would do this by analyzing data, assisting in interpreting test results, and helping identify potential errors or inconsistencies.

2.    Digitalization and remote testing: The increasing digitalization of healthcare could lead to a shift from traditional paper-based proficiency testing to online platforms.

3.    Remote proficiency testing could allow laboratories to participate in testing programs from anywhere, reducing the need for physical samples and paperwork. This would streamline the testing process, increase accessibility, and facilitate real-time performance monitoring.

4.    Personalized proficiency testing: As healthcare moves towards personalized medicine, proficiency testing may also become more tailored to individual laboratory needs. Laboratories may receive customized samples or challenges that align with their specific testing capabilities or areas of expertise. This approach could help laboratories identify and address any weaknesses or gaps in their performance more effectively.

5.    Continuous quality improvement: Proficiency testing programs are likely to focus more on continuous quality improvement rather than simply measuring a lab's performance. This could involve providing laboratories with detailed feedback and recommendations for improvement based on their performance in proficiency testing. Collaborative learning networks and knowledge sharing platforms may play a role in facilitating continuous quality improvement among laboratories.

6.    Regulatory changes and standardization: Changes in regulatory requirements, such as the implementation of new quality management systems or accreditation standards, may affect the future of proficiency testing. Standardization of proficiency testing methods and metrics across different regions and countries could improve comparability and harmonization of laboratory performance assessment.

Some of this I found interesting, but with apologies to the gurus of artificial intelligence, while the subject was correct, for much of the response the tense was wrong.  Most of it dealt with the near or distant past, or was just wrong.

For example, while there is interest in virtual microscopy that can be done on-line, for microbiology, chemistry, and blood banking it is difficult to imagine that test competency can be monitored without physical samples.

Also, the notion of personalized proficiency testing has been a fact for many years.  Laboratories already select the companies and the sample products they want to receive.  While there may be some refinements, they will likely be minor.

What does sound interesting and maybe even futuristic, the use of EQA to monitor knowledge and performance on continuous improvement and knowledge of regulation changes, is already in place in some EQA programs.  In our program we call that para-examination EQA.

So here is what I have learned… computers have reached a new point where they are able to access the whole datasphere and process large amounts of data on a wide range of specific topics, here and now.  Their memory systems can be trained to look for specific words and patterns, frame them in a new way, and restructure them into something different.  They can present that as maybe new and novel, but they are not ready to take over the world.

Interesting? … Yes.   Helpful?...  In some ways.  Definitive?  ….  Not Yet.

It reminds me of another new thought (????) roaming across the drivelsphere.  “… have a vision of what can be, unburdened by what has been.”