Thursday, June 30, 2011

Final Summary of the Workshop 2011

I have mentioned before my ambivalence about satisfaction surveys and their real usefulness, but my reality is that they are almost a personal addiction.  If I don't get to do a survey every couple of months I start to show all sorts of nervous twitches (not really!). But at least I keep things under better control these days with my rules:

  1. Focus them to a single issue.
  2. Limit the survey to only a few questions; best is to keep it to 5-6 and NEVER more than 10, and make them as uncomplicated as possible.
  3. Pre-test the questions to reduce (you can never avoid) ambiguity.
  4. Make sure that it can always be completed in 3 minutes or less.
  5. Never require an answer. That is a guaranteed invitation to bogus information.
  6. Decide in advance which slice of your audience you are interested in and then focus your energy only on that group. General send-outs are a total waste of time.
  7. Don't ask a question if you don't know what you are going to do with the information. (I forgot to mention that one before.)


Note that rule 7 (above) is not about avoiding questions where you might be afraid of the answer.  It is about not gathering information that you don't know how to use or analyze.

So with those rules in mind, we did an exit survey of the participants of our Quality Weekend Workshop.  We focused our questions on two areas: General Organization and Workshop Presentations.

The requested response was a check mark on a Likert Scale with 1 as the least positive (poor satisfaction) and 5 as the most positive (high satisfaction).  We also left room for optional comments.

The survey was optionally anonymous and took about a minute to complete, unless the person felt compelled to leave notes and comments.  Quality oriented folks LOVE to leave notes and comments.  It is in our DNA.

So here are the results.  In total we got a response from about 60 percent of participants.  We generally encouraged participation but did not solicit from anyone, or discourage anyone.  (See Table.)  Satisfaction was pretty high, with some concerns about the registration fee, and maybe some issues about the length of presentations.
I suspect that the registration fee may have been a tad high, and we will address that next time, although we did not get to survey people who chose to not attend.  Maybe they would not have attended even if the workshop was free!
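(For anyone who wants to run the same tabulation, here is a minimal sketch; the invited count, question labels, and scores are invented for illustration and are not our actual data.)

```python
# Minimal sketch for tabulating a 5-point Likert exit survey.
# The invited count, question labels, and scores are all invented.
invited = 13                      # hypothetical number of participants
responses = {                     # one score (1-5) per returned form
    "General Organization":   [5, 4, 5, 4, 5, 3, 5, 4],
    "Workshop Presentations": [4, 5, 4, 5, 5, 4, 3, 5],
}

returned = len(next(iter(responses.values())))
print(f"Response rate: {returned / invited:.0%}")   # about 60 percent here

for question, scores in responses.items():
    print(f"{question:24s} mean = {sum(scores) / len(scores):.1f} / 5")
```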


With respect to the comments on the workshop, we added all the comments to a single table and then submitted them to Word Cloud analysis, a semi-quantitative technique in which the most common words become the most central and largest in the picture.
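(For the mechanically curious, this takes only a few lines. Here is a minimal sketch using the third-party Python wordcloud package; the package choice, comment text, and file name are my illustrative assumptions, not necessarily what we actually used.)

```python
# Minimal word-cloud sketch; assumes the third-party Python package
# wordcloud is installed (pip install wordcloud). The comment text and
# output file name are invented for illustration.
from wordcloud import WordCloud

# All the free-text survey comments, pasted into one string.
comments = """
great speakers excellent presentations room too cold
fee a little high excellent content great discussion
"""

# The most frequent words are drawn largest and most central.
cloud = WordCloud(width=800, height=400, background_color="white").generate(comments)
cloud.to_file("comment_cloud.png")
```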

We interpret this as a positive cloud and consistent with the tabular data.

So the bottom line was an enjoyable, interactive meeting with high positives, and some concerns about fees and a cold room.  And we did not lose any money putting it on.  So I call that a success.

Planning for 2012 and 2013 is already underway, and the conference PDSA cycle begins again.

M

PS: we have taken the conference website down, but the information and the presentations are all available on www.POLQM.ca

Monday, June 27, 2011

Two sides of the Clinical - Laboratory divide.

I saw with appreciation Charlie's reminiscences of having phlebotomy duties and how it was such an important connector to the patient behind the bar-code label.  I agree.  It was then, and could still be today, an important way to link the disconnected laboratorian back to the reason that many of us got involved in health care.   In many respects it was a triple-win strategy.  The laboratorian collected the sample they needed for testing, having connected directly with the patient.  Since the laboratorian was more aware of the specifics with respect to sample volume, timing, and transport conditions, the patient benefited from the increased likelihood of a correctly collected sample.  This created the conditions for minimized risk of pre-examination error.

My background was similar, but from the opposite direction and with the exact opposite effect, because it created the greatest risk for examination error, over and over and over.  In the sixties and seventies, and a little into the eighties, hospitals seeking to keep their laboratory budgets down required interns and residents to perform laboratory tests, rather than have technologists do them, especially in the Emergency Rooms (ER) and in the evenings and on weekends.

The ER would have a broom closet converted to a cubby-hole laboratory with a centrifuge, a microscope, a urinometer, a hemoglobinometer, and a counting chamber.  Usually the room was full of tons of junk and debris and old samples dating back several weeks, creating an indelible odour of stale urine and God knows what else.   The available stain reagents were contaminated and loaded with crystal shards.  There we would do our own urinalysis, and haemoglobin and white counts and Gram stains.  On evenings we would plant our own urine cultures.

I know that this sounds like much less than what we see the house staff doing on the television show House, but it was a total waste of time.  (It is probably an equal waste on House as well!)  First, none of us had any real training, and even if we had, the microscope and centrifuge and urinometer were so badly cared for that the chance we could perform any test accurately was essentially zero.  And what was scary was that we actually used to make clinical decisions based upon those results.   The risk of examination error was near 100 percent.
The justification for all this was said to be that we were learning skills invaluable for being a clinician, but the real reason was that we were cheap labour.  And the clinical consequences of erroneous testing never showed on anyone's radar.    Fortunately, over time the cubby-holes reverted to being broom closets.

Over the last 10 years we have seen these mini labs return with a vengeance with more and more point-of-care testing.  We are told that the situation is totally different because the equipment is self-controlling and “idiot-proof”.  Not only that, but there is an international standard (ISO 22870:2006) that dictates the appropriate quality requirements when operating point-of-care equipment.  Maybe that's all true, but if facilities have not learned the lessons of ensuring competency and training for point-of-care testing, then they deserve all the attendant risks.  At our recent Quality Weekend Workshop, one of our presenters showed data demonstrating the bias and the degree of deviation on duplicate testing with several point-of-care instruments.

Manufacturers sell on simplicity of use, but they also work from a quality-of-use perspective.  It is not in their interest when poor usage by inexperienced or untrained personnel produces results that lead to negative outcomes.

The message here is not that laboratorians should venture into the clinical arena while clinicians stay out of the laboratory.  I see too many advantages in addressing the gap between these two spheres of activity.

But can we agree that training and competency and commitment to Quality not only have to be a part of the process, they have to be the process?



Thursday, June 23, 2011

Ariely and Astion - bringing meaning back to the laboratory worker

I am reading “The Upside of Irrationality” by Dan Ariely (2010, HarperCollins Publishers) and came upon a chapter that struck a chord.  It was one of those very satisfying moments.

Ariely is a psychologist and behavioural economist; one of those guys who works in experimental design to address basic issues in behaviour.  In this one study he was interested in what motivates people to work diligently.

The experiments involve asking folks to build complex characters using Lego building blocks or to play word games on paper.   The study design is essentially paying money for completing the task (either building something or completing the word game), but paying progressively less money each time it is repeated.  The variable is whether the person sees some additional recognition.  In one scenario, the person's building or word game is saved with their name; in another, their work goes into a pile; and in the third, their work is shredded immediately.

Turns out that folks will repeat the task multiple times, even though they are making relatively less money as they continue, if they see that their creations are connected to them by name or by preservation.  When their output is ignored or destroyed, they do fewer repetitions, and follow the procedure with less accuracy.   People work more, and more accurately, when their work is not ignored or dismantled.  People work better when their work has meaning.

When we put it that way, there is no surprise here.  But this raises a question about working in a modern medical laboratory.  One person receives a requisition and enters data into a computer, over and over.  Another takes the sample, now identified by a bar code, and puts it into a tube or cuvette, over and over.  Another pushes a button on a machine that makes the machine test the sample.  And another transcribes the machine result into a report.  If a worker's name is collected, it is not for recognition, but rather to know whom to blame when there is a problem.  This is a system that might make Henry Ford or Frederick Taylor happy (two men who built assembly lines), but it is tough to find a lot of meaningful recognition in this type of work.

In the microbiology laboratory, the technologist today receives a set of plates that have been incubating overnight, disconnected from the sample, disconnected from the requisition, disconnected from any clinical information.   Total disconnection. 

At our POLQM Quality Weekend Workshop, Mike Astion from Seattle was presenting on human resource issues in the medical laboratory and discussed this very scenario.  He was talking about laboratory workers being disconnected from their work… bored and making mistakes.  We need to connect the technologist back to the real, living patient.  We need to connect the technologist with a reason for feeling like they are contributing to patient care.  We need to have meetings where the laboratory people actually meet and talk with the patients.  We need to make the laboratory people feel connected and meaningful.
We need to ensure that laboratorians are clinically relevant.

There are all sorts of ways to make the reconnection.  In one laboratory that I visited in Tanzania, they have put patients' pictures on the walls.  Clinical technologist positions send the technologist out to participate in ward rounds.  Clinical-laboratory conferences can include patients as participants.
These are all old ideas that were common practice in the 1960’s.  It is time to bring them back.

I love it when information from many sources all comes together.






Monday, June 20, 2011

Quality Weekend Workshop - the day after

So this weekend we had our POLQM Quality Weekend Workshop, and while I may be accused of being a little biased, I have to say that it was terrific.  I am sorry that many people I would have liked to have at the meeting were not able to be there.   On the other hand, we got the chance to connect with some old friends that I did not expect to see.

Too bad, because we had brought together some of the most significant quality gurus from around the world, including Jane Carter from Kenya; Richard Zarbo, Michael Astion, Robert Martin, and Luci Berte from the US; Dr. Elisabeth Dequeker from Belgium; and George Cembrowski and David Seccombe from Canada.  David Hardwick, a visionary laboratorian with international expertise in Planning and Research in Pathology and Laboratory Medicine, gave an incredible presentation on the Medical Laboratory, Past, Present, and Future... the core message being continuation of the 250-year steady-state increase in knowledge and information at 2% compounded year over year.

Denise Dudzinski gave a thought-provoking presentation on ethical issues that surround disclosure of laboratory error to physicians and patients.  Where is the balance point between the right to know and unnecessary anxiety?  This was one presentation that will have me thinking for the next year.


What was so exciting to me about this meeting was the focus on Quality Partners, something that readers of MMLQR are very much aware of, but a concept emerging from the shadows to prominence in the medical laboratory world.  Between Bio-Rad, BD, CSA, and DAP and a number of proficiency testing providers, a huge light was shed.

The second theme was the growing opportunity for quality positions in medical laboratories in Canada and around the world.  Clearly the clinical healthcare community has awakened its interest in and awareness of quality.  Quality teams, Quality managers, and Quality lead positions are becoming a MAJOR growth point. As the total number of positions in laboratories gradually decreases, the positions in Quality are highly likely to remain stable or increase.

Jane Carter gave a brilliant presentation on the Quality activities that she is involved with in Kenya, and throughout eastern and southern Africa.    She and some of her staff were able to come to Canada on support from AMREF Canada, an international office of the African Medical Research Foundation.  (More on this later).


There was a lot of discussion about making the meeting a regular event on the annual calendar, and I have had some significant discussions with two potential partners.  We would have to think about it for a while. 

From a Quality perspective, we have had a lot of success, which is seen in the responses to the satisfaction comment sheets.  On a scale of 1 (poor) to 5 (very positive), the meeting rates somewhere near 4.8, with high marks for speakers, theme, presentations, staff, and catering.  The registration fee scored a bit lower (about 4.2), but on review it would be difficult to do much about that.  The biggest complaint was that the lecture room air conditioner was too strong.

On the opportunity for improvement side, the biggest issue that I have to work through is in marketing.  Despite what I thought was a wide distribution of notices, we were not very successful at attracting the size of audience that I wanted.  Some suggestions have been made (such as having the meeting on Father's Day weekend).  We will have to do some significant Study (as in PDSA) before we do the next one. 

Bottom line is that the meeting was a pretty positive experience, and we have a lot of reason to be pleased. 

We have a file of presentation precis which will be available at www.polqm.ca

M

PS
I have been asked to create the opportunity for email notification for posts to MMLQR.  You can register for that in the new box on the top right (FOLLOW BY EMAIL).  If you want to give this a try, this feed is fully confidential, and is not designed or intended for any activity other than email notification of posts. 

Friday, June 17, 2011

More Musings on Quality Partners.

I have been working harder on the concept of Quality Partners and have created a trial definition: 

The network of organizations that develop, promote, and provide services and assistance with the goal of supporting an effective laboratory foundation that is conducive to better patient safety and care.

It is a little long, but as a definition I think it is not a bad start. It contains all the key elements. The partners are not laboratories, but organizations intended to work with laboratories. The partners do not create an effective laboratory, but support the efforts of those responsible for creating effectiveness. The partners do not directly improve patient safety and quality of care, but create conditions conducive to both. Ultimately it is the head of the laboratory who makes the decisions that will make or break laboratory quality, but they must know that they do not have to make those decisions alone and without support.

The definition still has some weak points that need to be addressed. Some of the partners are engaged totally on a voluntary basis, others as a function of administration, and others by virtue of their regulatory or statutory function. And it does not incorporate the most important partner, the PUBLIC, as manifested by the MEDIA, the Regulators, the Legislators, and the Litigators.

The same can be said about the Graphic. It is still a work in progress, but worth sharing if others want to get engaged in discussion. (see below).




Tomorrow we begin our POLQM Quality Weekend Workshop with Quality Partners as one of the two core themes. It will be interesting to see how this all plays out. If I am the only one in the room that thinks this is an important concept, I will feel badly. Not wrong; just badly.

The meeting should go well this weekend. I personally think it is going to be a GREAT meeting. I hope that some of the speakers are not too disappointed by the size of the group. There is nothing that I can do about it now, and indeed I think that we did everything we could to draw a bigger crowd. But the time for recriminations is not now. It is after we have had the meeting, and after we have had the opportunity to do the “S” of PDSA. We need to study the information that we amass as and after the meeting transpires.

It becomes important because there is already discussion about another meeting in 2012 or 2013 (more definitely).
More later.

Saturday, June 11, 2011

Monitoring Satisfaction through Noble's Rules


In the laboratory business we have always thought it was all about the science and not about the business.

But we were wrong.
ISO as well as WHO and CLSI (and before them, Deming and Crosby) all acknowledge the importance of “Customer satisfaction”.
It is not so much that the customer is always right, but that the customer should always have a voice and should be heard. There is an expectation to have some form of customer input on a regular basis, perhaps as often as once a year.

The reason that the standards development bodies have included this as a requirement and the basis for policy is that it doesn't matter whether you are an academic providing a course, a laboratory providing documented information, a manufacturer providing umbrellas, a proficiency testing provider, or an equipment and reagent supplier: if your customers are not happy, then bad things start to happen.

In the private product or service sector that probably means customers stop coming. And that becomes the business killer.
 
In the public sector laboratory, the customer may not have a choice of which laboratory they have to use, but that won't stop complaints, reputation slurs, and an increased threat of litigation. (Incidentally, this applies to accreditation bodies as well.)
Sooner or later you risk becoming the interest of the public and the media.  

Or even worse, think about the embarrassment and humiliation of a public inquiry.

All of those are major career killers.

So what to do. In the business world, the godsend solution for customer satisfaction has become the on-line survey. It is so easy to create an on-line survey and send it out to all your important customers. So easy, in fact, that it has become too easy. 

Anyone foolish enough to give their email address to a hotel or car-rental or restaurant gets inundated with surveys. We have become a world of survey send-outers and survey send-inners, and most of it is a waste of time.

Most surveys are poorly designed: way too long, too complex, and far too diffusely focused. If a survey takes more than 2-3 minutes to complete, you can guarantee that either it will not be completed, or it will be completed with junk information.


Also, you have to remember that responders always have their own bias one way or another, and probably have interpreted the questions in ways that you never dreamed of. Most surveys run a high risk of being counterproductive for addressing customer satisfaction. As they say, “Fast, easy, slick and wrong”.

If you still feel compelled to resort to surveys, spend some time setting them up so that you might get some information that you can consider. (We call that PDSA.)


After years of learning the hard way, I figured out a set of simple rules  that anyone interested in developing a Satisfaction Survey can follow.  I arrogantly coined them as Noble's Rules for Successful Satisfaction Surveys.  

They don't guarantee success, but not keeping them in mind will pretty much guarantee failure.


(1) Focus them to a single issue.
The more you try to pack into a survey, the worse it gets.  Pick a topic and get out.


(2) Ask the question that needs to be asked, even if you may not like the answer.  
It’s very easy to create surveys that will always give you positive feedback by simply avoiding any potentially controversial or challenging issues, but how can you study or learn what people think if you don’t open up the discussion?




(3) Limit the survey to only a few questions; best is to keep it to 5-6 and NEVER more than 10, and make them as uncomplicated as possible.
Get in, ask a few questions, and get out.  Don't give them a chance to get bored.

(4) Make sure that it can always be completed in 3 minutes or less. Boredom is a guarantee for incomplete surveys loaded with random nonsense answers.  It would be better if they didn't send the response in, because the nonsense becomes pollution and the pollution leads to terrible interpretation.



(5) Pre-test the questions to reduce (you can never avoid) ambiguity. 
Make your questions VERY simple.  Confusing questions get confusing answers.



(6) Avoid requiring an answer. That is the other guaranteed invitation to bogus information.
Making people answer questions makes people angry.  Sometimes you can't avoid required questions, but keep them to an absolute minimum.


(7) Pick your audience and stick with it.  
General send-outs are a total waste of time.


(8) Where you can, avoid satisfaction surveys. 
A more effective solution for monitoring satisfaction is to look at objective measures.  For example, count how many complaints come in and how many are resolved within a specific time.
Set up a system to catalogue every complaint, something that most laboratories never do. All those telephone and hallway gripes are complaints, and they need to be included.


You may not think they were important, but the person who mentioned them did.
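To make rule 8 concrete, here is a minimal sketch of that kind of complaint log; the field names and the 10-day resolution target are my own illustrative assumptions, not a prescription.

```python
# Minimal complaint-log sketch; the field names and the 10-day
# resolution target are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Complaint:
    received: date
    source: str                 # "phone", "hallway", "email", ...
    summary: str
    resolved: Optional[date] = None   # None means still open

TARGET_DAYS = 10

log = [
    Complaint(date(2011, 6, 1), "phone", "late report", date(2011, 6, 6)),
    Complaint(date(2011, 6, 3), "hallway", "cold lecture room"),
    Complaint(date(2011, 6, 8), "email", "label misprint", date(2011, 6, 30)),
]

resolved_on_time = sum(
    1 for c in log
    if c.resolved is not None and (c.resolved - c.received).days <= TARGET_DAYS
)
print(f"{len(log)} complaints logged; "
      f"{resolved_on_time} resolved within {TARGET_DAYS} days")
```

The point is not the code; it is that once every gripe is logged, "percent resolved on time" becomes an objective measure of satisfaction that no survey can match.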

Friday, June 10, 2011

A VERY busy week in advancing Quality in Canada (and elsewhere)

PART A: Preparing for QWW next week.

So we are in pretty good shape with one week to go before the POLQM Weekend Workshop. The speakers are ready, the venue is ready, the caterers are ready. Mind you, I just read on Google that there might be an Air Canada strike next week, but that is something that I cannot control. The participation pick-up has not been as huge as I would have wanted, but again there is not much that I can do about that, especially now. We have sent notices to nearly 1000 folks around the world through a network of posting sites, including Making Medical Laboratory Quality Relevant. But there will be time to review all the positives and challenges after the meeting.

If anyone is planning on making a last-minute registration at the registration desk next Friday (June 17) evening, please be aware that the desk will be available in the Medical Student and Alumni Centre starting at 5:30 PM, and there will be a buffet dinner. Dr. Hardwick’s presentation starts at 7:00 PM.

For more information go to www.POLQMWeekendWorkshop.ca


PART B: Appraisal time.

We take the knowledge assessment for our UBC Certificate for Laboratory Quality Management Course very seriously. It has to be straightforward, clear, and what educators call integrative, which means that responses cannot just parrot back content from the course, but must tie multiple aspects together and provide examples in order to demonstrate that the information has not only been obtained, but also understood in a working context. That is not something that can be easily tested in a multiple-choice format, nor is it something that can be done under the pressure of time constraints.

One of the advantages of on-line courses and on-line examinations is that we can take time constraint out of the picture, because examinations can be done when the participant has the time to work on them. They are given a week to work through all the questions. They can submit their responses all at one time, or over a few days. The examination process is, in that sense, much closer to the working experience, where decision making does not always have to be immediate. On the other hand, all the participants are adult learners with busy lives and “real” jobs, so they don’t necessarily have the time or inclination to let the course linger too long.

So how are they doing? Really well. The examination deadline was midnight last night and the responses are in. So far I have seen and graded over 100 answers and am waiting on a few last-minute stragglers. In my sections I am finding almost all responses are satisfactory; of that group about 20% are exemplary, and of that group there is at least one person that I would hire in a heartbeat.

This, I expect, has a lot to do with the high calibre of participants who seek out the course, and the degree to which they participate and interact throughout. That, and the essential and unavoidable reality that the faculty is very knowledgeable and has combined to put together a course that provides valuable content, and participates in a manner that promotes discussion. I can say that because I have read the student reviews. (More on the value of student reviews coming very soon.)

By Friday we should have all the marking done and this year’s prize winner announced.

The winner gets free registration to the Weekend Workshop. If they are unable to attend, I have a plan B ready, but not yet announced.

Saturday, June 4, 2011

Stopped clocks and Uncertainty of Measurement

Uncertainty of Measurement in the medical laboratory is not all bad. Think of UM as being like the stopped clock which tells the right time twice a day.

To “calculate” the uncertainty of measurement, one develops an uncertainty budget to determine the impact of all the steps that might affect the variation in measurement values. While this may make some theoretical sense, the reality is that errors, especially in the pre-examination phase, tend to occur randomly and intermittently, on a periodicity best described as irregularly irregular. Trying to make calculations on random events is almost impossible, unless one studies each step many, many times to get, as the statisticians say, a sufficient n value. And that is an unrealistic and bizarre expectation to insert into a standard that is treated as a requirement for accreditation.
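To be fair, the arithmetic of a budget is trivial; the component standard uncertainties are simply combined in quadrature, GUM-style. A minimal sketch (the component names and values are invented) shows where the trouble really lies:

```python
# GUM-style combination of an uncertainty budget, in quadrature.
# Component names and values are invented for illustration; the hard
# part is getting defensible numbers, not doing the arithmetic.
import math

budget = {
    "calibrator":        0.04,  # standard uncertainty, same units as result
    "instrument drift":  0.03,
    "repeatability":     0.05,
    "pre-examination?":  0.10,  # the guesstimate that makes the exercise shaky
}

u_combined = math.sqrt(sum(u**2 for u in budget.values()))
U_expanded = 2 * u_combined  # coverage factor k=2, roughly 95% confidence

print(f"combined standard uncertainty u_c = {u_combined:.3f}")
print(f"expanded uncertainty U (k=2)     = {U_expanded:.3f}")
```

Notice that the calculation runs happily no matter what you feed it: guesstimates in, a confident-looking number out.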

So at what point can this be a good thing, even fleetingly? Well, I will tell you.
In the implementation of a Quality system, there are two processes that will make significant improvement to an organization. Some refer to them together as CAPA, as if the two are closely linked (which is incorrect).
The first is corrective action; once a problem is detected, it needs to be examined and analyzed for likely cause, and then be addressed to reduce the risk of repeated and ongoing weakness. This makes infinite sense, and is generally a pretty easy requirement for folks to understand, and is relatively easy to implement. Corrective actions are reactive responses to detected problems.

Preventive actions are different because they are more assertive, or pro-active. One takes a process and looks at all its steps for potential weak points which could fail and result in error or failure. One is looking for potential opportunities for future error and preventing them from occurring. Quality system standards expect organizations to regularly take the time not only to react to problems, but to be pro-active in preventing accidents, injuries, and errors.
In order to plan a medical laboratory preventive action program, first you have to know what happens to the samples at each step along the way. You need to think through processes looking for potential error. In the world of risk management this is called failure mode and effects analysis (also known as FMEA). And this step is essentially the first step that leads to developing an uncertainty budget. So while trying to calculate a value for uncertainty is generally a waste, we end up with information that we can use for another purpose.
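For those who have not met FMEA, the usual bookkeeping is to score each failure mode for severity, occurrence, and detectability, and rank by their product, the risk priority number. A minimal sketch with invented steps and scores:

```python
# Minimal FMEA sketch: rank failure modes by risk priority number
# (RPN = severity x occurrence x detectability), each scored 1-10.
# For detectability, 1 = almost certain to be caught, 10 = almost
# never caught. The process steps and scores below are invented.
failure_modes = [
    # (process step, severity, occurrence, detectability)
    ("sample mislabelled at collection", 9, 3, 7),
    ("sample delayed in transport",      5, 6, 4),
    ("wrong additive tube used",         8, 2, 5),
]

# Highest RPN first: these are the weak points to address pre-emptively.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for step, s, o, d in ranked:
    print(f"RPN {s*o*d:3d}  {step}")
```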

Looking for weak points and addressing them is a useful exercise and links risk management, and quality management preventive action programs. It is described and recommended in a guideline (called a technical specification) published by the International Organization for Standardization as ISO/TS 22367:2008 titled “Medical laboratories -- Reduction of error through risk management and continual improvement”.

To come back to my point that CAPA is a poor acronym: most institutions find doing corrective actions relatively easy.  Doing preventive actions is tough.  Many organizations find them the toughest part of quality management.  Maybe approaching preventive actions by working through a budget approach will make them easier to develop.

So in my world the following is clear: Requiring uncertainty of measurement in the medical laboratory is almost a total waste of time, with the singular exception that it does motivate us to take a closer look at individual processes with the view for finding potential weak points that we can amend through an active Preventive Action process.

Which brings me back to the stopped clock.

Thursday, June 2, 2011

No Uncertainty about Uncertainty

In the previous entry I made a comment about the reference to Measurement Uncertainty in the CRC Press book Quality Assurance in the Pathology Laboratory, edited by Maciej Bogusz. I noted that the author's sole argument for laboratories calculating their measurement uncertainty (more appropriately, uncertainty of measurement) was that it is expressed as a requirement in the standard ISO 17025. (It is also included in ISO 15189.)

I mentioned that in my opinion, there is NO worse reason for doing something than doing it solely because it is a requirement in a standard or because an accreditation body said to do it.

Here are some GOOD reasons why laboratories should perform certain processes and procedures:
1: We created a policy to which we are committed.
2: It is a legal requirement in the places that we work. Adherence reduces the risk of liability.
3: It is a customer requirement and expectation.
4: It creates a better, safer work environment.
5: It amends an error and reduces the risk of repeat.
6: Adherence enhances our financial health.

The concept of uncertainty of measurement is an extension of the technical philosophy that no measurement is absolute. Factors including the precision and stability of equipment, the consistency and quality of reagents, and the operator's technical competency, reproducibility of skill, knowledge, and talent can all influence the result of a measurement. All measurements should have some form of error bars around them.
I have absolutely no problem with that concept. It grew out of studies in the physical sciences, where the most minor of errors can result in a rocket being aimed at the moon but hitting Mars instead, where scatter within test results may get interpreted as cold fusion, or where the tiniest of deviations can result in huge alterations in the interpretation of collisions in an atomic accelerator.
In all these situations, where the study and analysis can be under complete control, it is appropriate to define the UM and take it into account during interpretation. I got it, I understand it, I believe it.

But here's a news flash: medical laboratories are not closed-system research centers. Most of the life of a clinical sample is far beyond our control, and for all intents and purposes is likely to remain that way. The patient and the collection are almost always at a distance from the laboratory, and there are too many variables that can impact the sample in ways that we cannot control. There is the technique of collection, and the stability of the container and its contents (including specific additives). There is the temperature at the collection site, the duration of transport, the temperature during transport, and the amount of agitation during transport. There is the amount of time the sample sits on a workbench before it is accessioned.

And those are the ones that we know. How about all the factors we don't know?

Generally we have an idea or impression about the uncertainty and impacts of these variables, but we have no way to calculate their impact. Metrologists understand this, but their answer is that that is OK: we will just develop a list of all the variables that we can think of and make an ESTIMATE or GUESSTIMATE of their value and their impact. This is called the Uncertainty Budget. So what we have done is taken a tool that was designed to calculate precise error-bar values and instead ended up with a best guess that may be close or not, may be valid or not, may be reproducible or not. But we have a Number, and we can now tell the accreditation team that we have a number (good), but then we go and tell our customer, whether a patient or a surgeon or a cardiologist, that we have a number (bad). What we don't tell them is that we have no idea how much confidence we have in the value. We just give them the value.
That is what I call both misleading and DUMB.

So what should we do? Well, we can do some things with confidence. We can test our analyzer repeatedly, plot the range of values that we get when we test a Certified Reference Material, and calculate the analyzer's range. Laboratories have done that for years and reported on test trueness, precision, and bias. If we interpret quality control tests against that range, we can say with confidence that the equipment has a certain allowable error range, and we can say with confidence that if we measured a sample concentration as 6.2 umol/L, the true value likely lies somewhere between 6.12 and 6.28.
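That confident statement rests on nothing more exotic than repeated measurement of the reference material; a minimal sketch of the long-standing practice (all values invented for illustration):

```python
# Minimal sketch of what laboratories have long done: repeated
# measurement of a Certified Reference Material to estimate
# precision and bias. All values are invented for illustration.
import statistics

certified_value = 6.20                      # umol/L, from the CRM certificate
measurements = [6.18, 6.22, 6.25, 6.15, 6.21, 6.19, 6.24, 6.17]

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)         # sample standard deviation
bias = mean - certified_value

print(f"mean = {mean:.2f}, SD = {sd:.3f}, bias = {bias:+.3f} umol/L")
print(f"~95% range for a single result: "
      f"{mean - 2*sd:.2f} to {mean + 2*sd:.2f} umol/L")
```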
But that is not generating a value for uncertainty of measurement.

Here is what is so annoying about this. Anyone who has worked in a laboratory understands that calculating precision and bias is an important aspect of being a laboratorian. Anyone who has worked in a laboratory also understands that making estimates or guesstimates for variables for which we have no basis for the estimate or guess is a fool's game. So why do we end up with standards with requirements that make no sense to the laboratorian?

It's because dumb things happen around the standards development, crafting, and negotiation table, and once something gets into a standard, regardless of how inappropriate it is, and how wrong it is, it is almost impossible to get folks to acknowledge the error and actually fix it.