Saturday, October 26, 2013

Saving Science: The Argument for Quality Assurance




About a year ago, I wrote on the subject of challenges in the medical research laboratory community, with particular reference to a newspaper article and a published paper by John Ioannidis.  [ See: Quality and the Research Laboratory - http://www.medicallaboratoryquality.com/2012/12/quality-and-medical-research.html ]  The discussion continues with the October 19, 2013 edition of The Economist: Unreliable research - Trouble at the lab.
The article presents a number of studies showing that the process of peer review is badly flawed (not really a surprise!!), but it also points toward an actual, viable solution.

The story references John Bohannon, a biologist at Harvard, who recently submitted a paper on cancer research to 304 journals describing themselves as using peer review. What made this submission different was that (a) the paper was sent under a pseudonym and (b) it was intentionally flawed, with errors in study design, analysis and interpretation of results. What was problematic was that 157 of the journals accepted it for publication.

Consistent with that observation, Fiona Godlee, editor of the British Medical Journal, did a similar study in 1998, sending a similarly flawed article to more than 200 of the BMJ’s regular reviewers. On average, only 25 percent of the flaws were reported; no one picked up 100 percent, and some reported none.

BMJ then did a follow-up study in which the reviewers were told it was a test; despite this, performance did not improve.  And a subsequent study from UCSF indicated that reviewer performance did not get better over time; it got worse.

This is pretty gruesome.

When credible publications like The Economist get onto a subject, you may want to dispute some or all of their facts and ideas, but the reality is that their international impact on the opinions of the public, governments and funding organizations is substantial.  Articles like this can shift research funding by 1 percent, 5 percent or more without anyone batting an eye.  In a time of economic fragility, trimming a billion or two from research budgets can seem like an easy way to deal with a budget crunch.  We can ignore it, or disparage it, but we do so at our collective peril.

Wearing my “Gee, what a surprise” hat, let me say that all this supports my argument that institutions need to clean up their act considerably.  More importantly, it reinforces my argument that traditionally published knowledge is no more reliable than information well presented by a trusted person in a standardized structure but submitted electronically through channels such as blogs.

But wearing my other “let’s get on and fix this” hat, it seems to me there is a viable solution.  Medical and non-medical testing and calibration laboratories can be improved through quality assessment.  In our own recent examination, 75 percent of laboratories that were correcting a proficiency testing error found system errors that affected their routine testing.  [See: http://www.medicallaboratoryquality.com/2013/08/proficiency-testing-does-improve-quality.html ]

I propose the following:  all credible journals need to develop a quality assessment strategy for all manuscript reviewers. 
1.    Reviewers are selected by virtue of their knowledge and skills in particular disciplines.
2.    Selected reviewers are retained for a finite period of time.
3.    Selected reviewers are expected to participate in proficiency testing of their reviewing skills on a regular basis (say, twice a year).
4.    Reviewers who are found proficient can continue reviewing.
5.    Reviewers who are found to have opportunities for improvement either demonstrate that improvement or move on.

It seems to me this is a viable solution, and it is consistent with the variety of proficiency challenges that we already run on a regular basis.  Assembling a pool of manuscripts deemed flawed or not flawed would not take long, and editing them for freshness would not be particularly challenging.
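
To make point 3 above a little more concrete, here is a minimal sketch, in Python, of how a journal office might score such a proficiency challenge: a test manuscript carries a known set of seeded flaws, each reviewer’s report is scored on the fraction of those flaws detected, and reviewers below a threshold are flagged as having opportunities for improvement. The class names, the example flaws and the 80 percent pass threshold are my own illustrative assumptions, not part of any existing journal workflow.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class ProficiencyChallenge:
    """A test manuscript seeded with a known set of deliberate flaws."""
    manuscript_id: str
    seeded_flaws: set[str]


@dataclass
class ReviewerReport:
    """The flaws a reviewer reported after assessing the test manuscript."""
    reviewer_id: str
    flaws_reported: set[str]


def detection_rate(challenge: ProficiencyChallenge, report: ReviewerReport) -> float:
    """Fraction of the seeded flaws that the reviewer actually caught."""
    caught = challenge.seeded_flaws & report.flaws_reported
    return len(caught) / len(challenge.seeded_flaws)


def assess_reviewers(challenge: ProficiencyChallenge,
                     reports: list[ReviewerReport],
                     pass_threshold: float = 0.8) -> dict[str, str]:
    """Classify each reviewer as proficient or as needing improvement."""
    verdicts = {}
    for report in reports:
        rate = detection_rate(challenge, report)
        verdicts[report.reviewer_id] = (
            "proficient" if rate >= pass_threshold else "needs improvement"
        )
    return verdicts


if __name__ == "__main__":
    # Hypothetical challenge and reviewer reports, for illustration only.
    challenge = ProficiencyChallenge(
        manuscript_id="TEST-2013-01",
        seeded_flaws={"no control group",
                      "inappropriate statistical test",
                      "conclusions not supported by data"},
    )
    reports = [
        ReviewerReport("reviewer-A", {"no control group",
                                      "inappropriate statistical test",
                                      "conclusions not supported by data"}),
        ReviewerReport("reviewer-B", {"no control group"}),
    ]
    for reviewer, verdict in assess_reviewers(challenge, reports).items():
        print(reviewer, verdict)
```

In practice the pass threshold, the mix of flawed and clean manuscripts, and the follow-up for reviewers who fall short would be set by the editorial board, much as acceptable limits are set in laboratory proficiency testing schemes.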

The review process would improve.  The quality of published literature would improve.  Confidence in published literature would improve.
