
Do Science Journals Need an Alternative to Peer Review?

The latest controversy over a stem cell study reignites debate over the process of publishing breakthrough research.
Image: Shutterstock/Protasov AN

This January, Japanese biologist Haruko Obokata published two papers in Nature that were hailed as “groundbreaking” in the field of stem cell research. Obokata and her co-authors wrote that they had found a new way to reprogram adult mouse cells into an embryonic state just by giving them an acid bath; you may have seen it lauded in the popular press as a “miracle cure.” 

A month later, that apparent breakthrough was looking less certain. Bloggers began pointing out similarities between images in the Nature papers and Obokata's earlier work, along with other image problems. Researchers also had difficulty reproducing the results.


It’s as yet unclear exactly what happened—there could have been an accidental mix-up with the images, and it’s not entirely surprising if people can’t easily reproduce results from a complex procedure—but today the institution where the research was conducted apologised for the situation and admitted there were “serious errors” with the papers.

In an interim report on an investigation into the matter, they wrote there had been “inappropriate handling of data” concerning some of the issues raised, “but the circumstances were not judged to constitute research misconduct.” The various authors of the papers are considering retracting them.

While it remains to be seen whether the studies’ results hold up, there are clearly serious issues with the published papers. So how did they get published, in a journal as prestigious as Nature, no less?

As with most respected journals, articles published in Nature are peer-reviewed: before publication, they’re read by two or three people considered to have the appropriate expertise, who almost always remain anonymous. In light of events like this recent controversy, the debate over the peer review process has been reignited, with some suggesting that it’s perhaps time for an alternative approach.

One scientist took to a public forum to publish his own version of the stem cell experiment in question—which showed he was unable to replicate the results. Kenneth Lee from the University of Hong Kong published a review of the controversial paper on ResearchGate, a kind of social network for scientists, with the conclusion, “We have tried our very best to generate STAP cells using their protocol and it appears that it is not as simply and reproducible as we expected. So whether the techniques really work, still remains an open question?”


Lee’s was in fact the first review to appear on a new feature launched yesterday by ResearchGate called “Open Review,” which aims to encourage this kind of widespread, open discussion of scientific research. Ijad Madisch, the CEO of ResearchGate, told me he thought this latest controversy in the world of scientific journals could just be the “tip of the iceberg,” and that it threw the efficacy of peer review into doubt.

“It’s anonymous, it’s not transparent, it takes too much time, and it’s only two people reading the article without reproducing it,” he said in a phone call. He’d like to see it done differently: “You should publish the results and then there should be a network review, a public, open review of these results and datasets immediately after you created these results.”

Of course, peer reviewers can hardly be expected to sleuth out whether papers have been doctored, or whether the wrong data or images have accidentally been used; they can only go on what’s in the paper. “Journal editors do not expect peer review to ferret out cleverly concealed, deliberate deceptions,” Nature explains in its peer review policy. “A peer reviewer can only evaluate what the authors chose to include in the manuscript. This contrasts with the expectation in the popular press that peer review is a process by which fraudulent data is detected before publication (although that sometimes happens).”

But other issues with peer review have been raised before: there are suggestions that the process can lead to bias, and there has been large-scale controversy over the lack of rigour at apparently peer-reviewed journals, as revealed by Science’s sting last year, in which entirely fabricated articles were accepted for publication. That’s not to mention the time and expense peer review adds to publishing new discoveries.

In this case, it’s quite telling that problems with the papers were first raised by the blogosphere, which represents something of a polar opposite to the closed, secretive process of traditional peer review. As we all know, anyone can say anything online—and sometimes that’s a good thing.

But if research were published for such open scrutiny, with anyone able to write a review, I asked Madisch, wasn’t there a risk of scientists’ reputations being unfairly damaged by false or inaccurate claims? “Of course these things will happen but it’s transparent, and people can defend themselves,” he responded. He added that the same sort of thing can happen in the traditional journal environment anyway; at least online the discussion is in the open.

All of this suggests it’s perhaps time journals looked beyond their closed circles of peer reviewers and considered how to involve the wider online community of academic bloggers and scientists in the review process. Because ultimately, as we've seen in this recent episode, they’ll probably gatecrash the discussion anyway.