Every now and then somebody finds grant money to test the efficacy of peer review in making sure the scientific literature is fairly judged, and scrubbed reasonably free of egregious errors. The results are often hysterical, in a dark and disturbing way.
Today's example comes from psychology (OK, sort of a science), and concerns the review of previously accepted articles, resubmitted to exactly the same journals that had published them, with the identifying information (author and institution) replaced to lightly disguise the fact that the manuscripts were not fresh. So what happened?
Abstract

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as "serious methodological flaws." A number of possible interpretations of these data are reviewed and evaluated.

I guess I'm not terribly surprised that neither the editors nor the reviewers detected the ruse. Editors probably don't do much but read the abstract and pass it off to an assistant editor; this may even be a function that secretaries handle in many cases. There's an excellent chance it would go to a different assistant editor, who would therefore not be immediately acquainted with it (although you might hope he would at least scan the published contents of his own journal). He would then likely send it out to several referees, hoping two or three might accept it for review; again, it would take bad luck of the draw to get the same reviewer twice. And some of them might even think they were re-reviewing the original article! A catch rate of 8% seems low, but possible, just on that basis.
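The "luck of the draw" point can be made concrete with a back-of-the-envelope calculation. The pool size and referee count below are hypothetical (the study doesn't report them), but the shape of the result holds for any reasonably large referee pool:

```python
from math import comb

# Hypothetical numbers, NOT from the study: suppose each journal
# draws 3 referees per submission from an active pool of 50.
pool, per_paper = 50, 3

# Probability that a resubmission shares no referee with the
# original round: choose 3 referees from the 47 not used before,
# out of all ways to choose 3 from the pool of 50.
p_no_overlap = comb(pool - per_paper, per_paper) / comb(pool, per_paper)
print(f"P(no repeat referee): {p_no_overlap:.1%}")
```

Even with a fairly small pool, most resubmissions would see an entirely fresh set of referees, so detection hinges on an editor's or referee's memory of the published article rather than on repeat assignment.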
As for the failure of the already accepted and published articles to survive a second review, there are three obvious mechanisms, not mutually exclusive, and none of them is particularly appealing.
The first is that most papers contain "serious methodological flaws", but a few draw the lucky straw in the review process: the flaws go undetected, and the papers go on to publication. In a second "independent" review they draw the majority straw instead, and are found out and rejected. Not a pretty sight, but at least not a product of bias, known or unknown. IIRC, similar patterns have been observed in tests of grant proposals, where the same proposal sent to multiple groups of reviewers drew widely varying verdicts. In a system where more papers and proposals are rejected than accepted, it's easy to see this as simply a statistical noise issue.
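The statistical-noise story can be sketched numerically. This is my illustration, not the study's own analysis: treat each review round as an independent draw with the journals' stated 80% rejection rate, so a paper's first-round acceptance tells us nothing about its second round.

```python
import random

random.seed(0)

# Pure-noise model (an illustration, not the study's analysis):
# each review round independently accepts a paper with probability
# 0.2, matching the journals' stated ~80% rejection rates.
p_accept = 0.2
trials = 100_000

# Resubmit `trials` already-accepted papers; under independence the
# first-round outcome is irrelevant, so just redraw the second round.
rejected_on_resubmission = sum(
    random.random() >= p_accept for _ in range(trials)
)
print(f"resubmission rejection rate: {rejected_on_resubmission / trials:.1%}")
```

Under that model a resubmitted paper is rejected about 80% of the time, and seeing 8 rejections out of 9 resubmissions is unremarkable (the binomial probability of 8 or more rejections in 9 trials at p = 0.8 is roughly 0.44).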
Another is that reviewers are heavily biased in favor of people they already know of in the field, and from institutions they believe respectable (the original submissions). The reviewer then would give a paper from an unfamiliar author and an unfamiliar institution much more detailed scrutiny, finding the "serious methodological flaws" that previously passed in the less scrutinized original article.
The third possibility is that reviewers and editors are conscious gatekeepers of the "purity" of their literature, only allowing favored authors and institutions to publish in their journals. An especially egregious example of this sort of behavior came to light in the "Climategate" scandal, when released e-mails revealed scientists conspiring to keep certain papers out of the literature. It wouldn't have to be that conscious or well planned.
What were they thinkin' when they funded this study!
Hat tip to Watts Up With That (again).