Western medical care prides itself on being generally science based, so-called "evidence-based." While there is a role for the so-called art of care, grounded in experience and feeling, the strength and power of medical care is predicated on the weight and influence of careful study and analysis. As "scientists" we build our knowledge from our own study and from reading the studies of others. This, of course, rests on the assumption that the material we are reading is worth reading.
While simplistic, there is a real commonality between the Scientific Method and Deming Quality. One asks a question, forms a hypothesis, and establishes a method to collect and measure information (PLAN); one then takes the measurements (DO), evaluates the results (STUDY), and draws a conclusion that shapes the next hypothesis (ACT). You might think that Deming was some sort of scientist. Having earned a Master's degree (U Colorado) and a PhD (Yale) in mathematics and physics, he was certainly inclined in that direction.
Unfortunately, much current (and past) medical information doesn’t bother with either science or quality. For example, a few days ago I received a copy of the Journal of Infection and Public Health (Volume 3, Issue 4, 2010) with an article about urinary tract infections in a neonatal intensive care unit. The methodology of the study was a retrospective review of urine culture reports, classifying each into one of three categories (definite infection, possible infection, and contamination) based solely on the recovered bacterial count. This is what we call nonsense. There is so much wrong with it that it is hard to know where to start. The data were collected before the hypothesis was formed. The data were not screened for confounders such as prior antibiotics, or for other even more important pre-examination and examination factors that would influence the measurement. And the interpretations of the information collected were trivial and superficial, at best.
I am not sure with whom I am more annoyed: the authors, who built a crude examination on test results that were never designed or intended to be the basis of critical analysis; the reviewers, who looked at this crap and decided it was sufficiently interesting to publish; or the journal publisher, so obviously desperate for something to print that they ran the article in some vain attempt to give themselves credibility and sustainability. That it came from a Canadian institution in an international journal made it that much more embarrassing.
I have been around this block a long time, and I too have written junk like this in the past so that I could go to a meeting, or get an extra line on my CV. But all this accomplishes is to create noise and diminish the quality of reading time. And, now that we are ensconced in the 21st century, we should be learning from past mistakes, not emulating them.
So qualitologists have an obligation. Given a choice between a structured, cohesive manuscript and junk, go for the former. Given a choice between submitting junk and not contributing to the literature at all, go for the latter.
It’s a variation on the old proverb: better to submit nothing for publication and have people think you have nothing constructive to say, than to submit junk and have them know.