People Think Computer Journalists Are More Trustworthy Than Human Ones

Humans did win higher marks for writing stories that are well-written, interesting, and coherent.
Robots take more jobs. Image: Matthew Simantov/Flickr

When Bill Gates says that software is going to take our jobs, not even journalists are immune. As traditional ad revenue continues its decline, publishers are under more and more pressure to build traffic for less money, which often means endless aggregation—paying writers little to no money to write quick summaries of other articles with unique headlines that, in bulk, can boost traffic on the cheap.

Whether you think aggregation is useful or not, what is clear is that it's going to be automated, and the change is already happening. In the sports world, where game recaps often boil down to a narrative stat sheet, computerized reporting is growing in prevalence, as is other regular stat-based reporting, like earthquake coverage. And last year, Yahoo paid $30 million for Summly, an app that—in short—automatically summarized the salient points of a text. At this year's CES, Yahoo announced News Digest, its first app built off of Summly's technology, which features stories that are "algorithmically produced, but editorially curated," according to Marissa Mayer.

But can a computer really be as trustworthy as a bona fide human journalist? A new study in the peer-reviewed journal Journalism Practice had test subjects compare articles written by software and by humans, and the results are surprising: While humans won higher marks for writing stories that are well-written, interesting, and coherent, the software-written articles were ranked as more objective, more informative, and more trustworthy.

The study by Christer Clerwall of Karlstad University in Sweden is a fascinating read about the role of technology in shifting the way journalism is done. (And, likely because he's a human, it's well-written and easy to understand.) One of its key points is that it's really hard to evaluate the quality of journalism based on a single metric:

Credibility seems to be at the heart of the assessment of news stories. However, there are other criteria pertaining to the evaluation of the overall quality of a news story. …

The medium and/or channel as well as the source/sender is/are important in the users’ perception of the credibility of a message. However, when a message is stripped of these credibility “clues”, users will have to draw conclusions about the message based on how they perceive the quality of the message: this is affected by the presentation, plausibility, and the specificity of the message.

Clerwall's study compared automated sports recaps to those written by journalists; the largest of a trio of surveys involved 46 students, 19 given a human recap of a Chargers football game, and 27 a software-generated recap produced by Statsheet software. They were asked to rank each report based on a dozen attributes that can be used to define the quality of a piece of reporting, and the computer report won out in a lot of key categories:

Now, there are some clear caveats to the study, which Clerwall notes. First, the sample sizes are small. (Clerwall writes that it's a pilot.) Its limitation to sports game recaps is the main concern, as they're largely expected to be formulaic, which means that a human adding a bit of flair might be a negative for readers looking for straight numbers. Asking a computer to sort out Malaysia Airlines Flight 370 is a much taller task, and one that still requires a human journalist's nose for bullshit.

But that's beside the point. Software will only get better, and as it does, it'll continue to encroach on the role of journalists as aggregators and repackagers of available information. (The day that computers get into actual reporting is the day the robots fully win.) And as it becomes even more widespread, reporting software will continue to highlight the essential tension of journalism today: What's the role of a reporter?

Above all, journalism should be about presenting the truth (which isn't always as cut and dried as anyone would like). But how that truth is presented is where the wiggle room lies. On one hand, you have outlets like the Associated Press, which tends to have a very straightforward style that's perhaps a bit dry but reads as highly objective. On the other extreme end are articles focused on pulling out a single story and hyping it to maximum effect, which makes for far more clickable headlines.

If people don't read your stories, you can't pay your rent; if you're only producing misleading garbage, readers will know to stay away. At the same time, a deeply reported story may easily get less traffic than a reblog of another report with a super juicy headline.

Most publishers these days find themselves trying to strike a balance between the two, ensuring that readers come back with blogging, while investing in more resource-intensive reporting to help build a reputation, support the journalism ecosystem, and let reporters write the types of laudable stories that aren't soul-crushing.

Major publishers will continue to adopt automated reporting software, as it's simply a cheaper method for pushing out cheap content in bulk. But what happens if automated aggregators are actually perceived as being more trustworthy? A whole lot of content farms will become automated, for one. It will also put more demand on the presentation and editing of stories, as delivering 1,000 software-aggregated stories a day is too much noise for readers.

Then again, if reporters can be automated, so can editors. Software already exists to hone headlines by presenting two different ones to readers and automatically selecting the winner after a set test period. Imagine that across an automated content farm, in which news delivery is honed by popularity and individual tastes. Even with a somewhat clunky algorithm, the sheer cost-effectiveness of such a model would be attractive, assuming that readers could actually trust what it produced—and it appears more and more possible that they could.
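The headline-testing software described above is essentially a simple A/B test: serve two candidate headlines at random, count clicks, and lock in the better performer once the test period ends. As a rough illustration (not any particular vendor's implementation; the class name, click-through metric, and impression-based cutoff are all assumptions), the logic might look like this:

```python
import random

class HeadlineTest:
    """Minimal A/B headline test: serve two variants at random,
    track click-through rate, and lock in a winner after a set
    number of impressions."""

    def __init__(self, headline_a, headline_b, test_impressions=1000):
        self.stats = {headline_a: {"shown": 0, "clicks": 0},
                      headline_b: {"shown": 0, "clicks": 0}}
        self.test_impressions = test_impressions
        self.winner = None

    def serve(self):
        # After the test period, always serve the winning headline.
        if self.winner:
            return self.winner
        headline = random.choice(list(self.stats))
        self.stats[headline]["shown"] += 1
        # End the test once enough total impressions have accumulated,
        # picking the variant with the higher click-through rate.
        if sum(s["shown"] for s in self.stats.values()) >= self.test_impressions:
            self.winner = max(
                self.stats,
                key=lambda h: self.stats[h]["clicks"] / max(self.stats[h]["shown"], 1),
            )
        return headline

    def record_click(self, headline):
        self.stats[headline]["clicks"] += 1
```

A real system would add statistical significance checks and per-reader personalization, but even this crude version captures the appeal: the editor's judgment call ("which headline is better?") is replaced by a cheap, fully automatic measurement.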