Scientists Are Worried About 'Peer Review by Algorithm'

A researcher scanned 50,000 papers for statistical errors using an automated system.

On August 23, a Dutch statistician shook up the usually slow world of scientific publishing by giving it a glimpse of an automated open data future.

Chris Hartgerink, a researcher at Tilburg University's Meta-Research Group, used a program called Statcheck to scan over 50,000 published scientific papers for statistical errors and posted the results to the science discussion board PubPeer.

By making the data openly available and inviting individual scientists to see how their own work measures up, he's prompting a public conversation about peer review (the traditional process in which scientists anonymously review each other's work before publication) and its ability to catch basic math errors, sloppy work, or possible fraud.

"We're checking how reliable is the actual science being presented by science," said Hartgerink.

One of Hartgerink's colleagues created the program last year and used it to show that about 13 percent of published results in eight psychology journals contained statistical errors that changed their conclusions, something that could help explain why scientists say they struggle to reproduce published results.

Hartgerink scaled that effort up, turning the program loose on the archives of several major commercial scientific publishers, automatically scraping over 688,000 individual statistical results and recalculating them to confirm their accuracy. The thousands of psychologists and social scientists who wrote the original papers then received an email from PubPeer informing them that a sort of personalized report card on their work had been posted online.
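
The basic check is simple enough to sketch. Statcheck itself is an R package, but the hypothetical Python snippet below (the regex, function name, and rounding tolerance are illustrative choices, not Statcheck's actual rules) shows the core idea: extract an APA-style t-test result from a paper's text, recompute the two-tailed p-value from the test statistic and degrees of freedom, and flag it if the reported p-value doesn't agree.

```python
import re
from scipy import stats

# Hypothetical sketch only: Statcheck itself is an R package, and its real
# rules are more involved. This simplified analog recomputes the p-value
# implied by an APA-style t-test report such as "t(28) = 2.20, p = .03".
APA_T = re.compile(
    r"t\((?P<df>\d+(?:\.\d+)?)\)\s*=\s*(?P<t>-?\d+(?:\.\d+)?)"
    r",\s*p\s*(?P<op>[=<>])\s*(?P<p>0?\.\d+)"
)

def check_t_results(text, tolerance=0.005):
    """Return reported t-test results whose p-value disagrees with the statistic."""
    findings = []
    for match in APA_T.finditer(text):
        df = float(match.group("df"))
        t_value = float(match.group("t"))
        reported_p = float(match.group("p"))
        # Two-tailed p-value implied by the reported t statistic and degrees of freedom.
        computed_p = 2 * stats.t.sf(abs(t_value), df)
        if match.group("op") == "=" and abs(computed_p - reported_p) > tolerance:
            findings.append((match.group(0), round(computed_p, 4)))
    return findings

# The recomputed p is roughly .036, so the reported p = .01 gets flagged.
print(check_t_results("The effect was significant, t(28) = 2.20, p = .01."))
```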

It's too early to tell whether any serious cases of error were identified by the data scraping, but scientists seem to be taking the unsolicited public audit pretty well so far. Some whose papers received a clean bill of health posted the reports to Twitter, including Jennifer Tackett, a professor of psychology at Northwestern University.

@PubPeer @NatureNews @Neuro_Skeptic @RetractionWatch I got mine tonight! Can I frame it? pic.twitter.com/mBVa3NSVxk
— Jennifer Tackett (@JnfrLTackett) August 26, 2016

She said she welcomes any additional scrutiny if it "sheds light on the magnitude of statistical reporting errors in our field." She added that the public forum didn't bother her, since the posts were simply a re-interpretation of reported results.

"All of our work has errors—we do what we can to minimize them, but I am under no illusions that my work is error-free," she said. "All of the data came from published work, which none of us 'own.' Anything I've published is out there for people to analyze in whatever way they choose."

Others, however, are apprehensive about where automated checks could be going. "What's next? Peer-review by algorithm?" asked one.

@jeffvallance Never seen 'statcheck'. Feels like plagiarism software. I hate it! What's next? Peer-review by algorithm? Robot authors?
— Scott LaJoie (@aslajoie) August 28, 2016

Dorothy Bishop, a professor of developmental neuropsychiatry at Oxford University, said she is generally a strong supporter of work that could improve reproducibility in science by promoting more accurate data reporting. But bombarding scientists with emails to tell them their papers had been posted directly to PubPeer, a community known for identifying and investigating scientific fraud, was likely to turn many of them off, she said, even in cases where no inconsistent data was found.

"My personal view is that the focus should be on errors that do change the conclusions of the paper," she said. "I think at least a sample of these should be hand-checked so we have some idea of the error rate," she said in an email.

She explained that without knowing exactly how accurate the program is, it's unclear how many of its flagged results are valid, and that the approach puts the onus on scientists to take the time to check the reports and respond publicly.

Whether or not researchers are comfortable with an automated scientific sentry system trawling their past and present data for errors, this sort of public data dump may become the new normal for scientific publishing. The Meta-Research Group and others who develop these systems release their code as open source, allowing anyone to access the tools.

"There are thousands of papers being published weekly," Hartgerink said. "[thanks to the programs] there's nothing stopping us from doing this every week."