Five years on, Retraction Watch continues to push harder for transparency in scientific publishing.
In 2010, anaesthesiologist Scott Reuben, a researcher from Tufts University whose work on painkillers had influenced others in his field, pleaded guilty to fraud. He'd fabricated data in 21 of his studies.
It was a particularly brazen case of healthcare fraud that resulted in papers being retracted across several journals, and it was one of the driving forces behind the creation of Retraction Watch, a blog run by journalists Ivan Oransky and Adam Marcus that sets out to "track retractions as a window into the scientific process."
The Reuben story, Oransky said in a phone call, made one thing clear: "There's almost always a really good story behind a retraction."
Retraction Watch celebrated its fifth year online this month, and Marcus and Oransky told me how it grew from an idea for occasional posts into a blog with over 10,000 subscribers, a growing staff including new editor Alison McCook, and multiple stories a day.
The premise of Retraction Watch might sound dry: It is, after all, essentially based on the small text in scientific journals that have a niche readership to start with.
But there's usually a juicy story to be found by digging into the reason behind a retraction. Often (but not always) a retraction is the result of misconduct or even fraud. Researchers who have made Oransky and Marcus's blog include those who have manipulated data, faked peer reviews, or neglected to reveal conflicts of interest.
The fallout can be major. Long before Retraction Watch there was the fraudulent article by Andrew Wakefield that suggested a link between the MMR vaccine and autism. Published in the Lancet in 1998, it wasn't fully retracted until 2010, by which time the discredited results had fuelled the anti-vaxxer movement. More recent infamous examples include two papers published in Nature in January 2014 that claimed to demonstrate an easy way to make stem cells, and a Science study featured on the popular podcast This American Life that claimed gay canvassers could change people's minds about same-sex marriage.
Oransky and Marcus make it their mission to investigate the stories behind retractions like these, which are not always easy to find. "There's a pretty disturbing lack of transparency in the [retraction] notices themselves," said Oransky. "It's often very difficult to tell why a paper was retracted; that's a major reason why we started the blog."
"If science is going to proclaim that it's self-correcting, then it should be held to a particular standard," he continued. In the team's experience, with retraction notices so often opaque, science isn't yet meeting that standard.
Finding the real story can sometimes be tough; journals are hesitant to provide information as they see retractions as a black mark against them, and authors are rarely begging to come clean. From the early days, Retraction Watch found that its readers and commenters were helpful at filling in some of the gaps, and they still rely on tip-offs to stories they might not have seen themselves.
It's a special occasion when they can get real answers from the actual players involved, and Marcus said his favourite stories are those in which a researcher takes the time to explain what went wrong after an honest mistake.
In their book, these researchers fall into the category of "doing the right thing." "I think this really ultimately gets at why we're doing what we're doing," said Marcus. "It's not to point out the ridiculousness of people who screw up, it's actually to show how science works and can work when it's working really well."
Oransky highlighted a similar "honest" case in which researchers had been pushing on with experiments that seemed to be giving the right results, only to apologetically retract the findings after realising that their lab had mistakenly ordered the wrong mice from the start.
"Which is not to say, by the way, that we don't enjoy a good scandal," Marcus admitted. Retraction Watch is still happy to name and shame repeat offenders, and recently published a leaderboard of the scientists who have collected the most retractions. It's currently topped by anaesthesiologist Yoshitaka Fujii, with an impressive 183 papers retracted after claims he had fabricated data.
Which all raises the question: How does this kind of thing happen in the first place? The problems start long before a retraction notice is published. One systemic issue is peer review, the utility of which Oransky and Marcus questioned. While it might catch some errors, they pointed out that in cases of fraud or misconduct, it's not fair to expect reviewers to weed out fabrications—if someone wants to just make up numbers, how will they know?
"We are big champions of what's called post-publication peer review," said Oransky. The idea is that people continue discussing the contents of papers after they appear in journals, a practice facilitated by online platforms like PubPeer or ResearchGate.
And it's not just about weeding out lies, but also about checking that results hold firm when experiments are reproduced. That's a huge problem: the journalists pointed to a study published this year in PLOS Biology estimating that more than half of preclinical studies are not reproducible, at a cost of $28 billion a year.
These problems are compounded when retraction notices are unclear and dodgy papers remain out there. After all, journals have little incentive to retract or to make their retractions clear. "A lot of journals live and die by something called the impact factor, which is basically a measure of how often papers are cited," Oransky explained. "And if a paper gets retracted it obviously shouldn't get cited any more, or at least shouldn't get cited in support of an idea."
But it happens: studies have shown researchers continue to cite retracted papers, either without realising they've been retracted or without noting the retraction.
To help address this problem of transparency, Retraction Watch is now building a kind of one-stop shop for retractions. They received a $400,000 grant from the MacArthur Foundation in December to create "a comprehensive and freely available database of retractions." The Laura and John Arnold Foundation also just awarded Retraction Watch's parent organisation, the Center for Scientific Integrity, a $300,000 grant that will also be partly used to support the database.
This, they explained, will aim to help scientists easily find information about retracted or corrected papers, and stop misinformation from propagating in their research.
Since Retraction Watch began, the number of papers retracted from journals has gone up. A 2011 Nature study found retractions had increased tenfold over the preceding decade, far outpacing the growth in the number of papers published. It's hard to tell whether that's down to more mistakes or better policing.
Oransky suggested that more attention was now being paid to the issue. "I just think this is much more a part of the conversation than it ever was," he said.
He wouldn't be drawn on the potential role of Retraction Watch itself in encouraging greater transparency—"but if we've played some part in that, we're extremely gratified by that," he said.