Friday, October 04, 2013

Science Magazine issue on science communication: quality control for manuscripts

Quantification of the flawed-manuscript submission study. On the left, categories of target journals. Green represents the DOAJ list, which the article suggests does not vet journals sufficiently (see text and link). Beige represents journals on a blacklist maintained by Jeffrey Beall. Of the target journals, very few of those that rejected the paper came from Beall's list, while many of the DOAJ journals did recommend rejection.
This week's Science has several articles about improving the communication of scientific results. Scientific journals are in the midst of a revolution driven by the world-wide web, and there are many new entrants into the field. Since scientists are under constant pressure to publish, the dramatic increase in titles creates a possible risk to publication standards.

An article by John Bohannon in the Science special issue focuses on quality control in open-access journals. To test how well the new journals were doing in handling manuscripts, Bohannon deliberately submitted a highly flawed manuscript to 304 open-access journals, and was flabbergasted to find the manuscript was accepted 157 times. Of the accepting journals, a disproportionate number came from a name-and-shame list of journals that Jeffrey Beall believes are acting unprofessionally (this category would also include hidden billing and unclear copyright behaviors). However, many others that appear in the Directory of Open-Access Journals (DOAJ) also accepted the paper.
Remarkably, Bohannon found that four of these journals went ahead and published the faulty manuscript even after it had been withdrawn.
This indicates a system of scientific discourse in distress. It is important to point out that print journals, which are not covered in this study, are also fallible, and in particular have had issues with more subtle problems like data fabrication. In fact, the blog Retraction Watch contacted Bohannon regarding his faulty-manuscript study, and he said print journals were not targeted because their turnaround for manuscripts was too slow to get good numbers for his study! I think the manuscript study is thus best taken as evidence of systemic stress rather than flagging a particular publishing model as especially flawed.

Update: the Chronicle of Higher Education is more critical of the study design. I still think what Bohannon found is pretty appalling, even within those limitations.
