


Scientists Are Still Using Facebook to Study Human Behavior

Public outcry isn't enough to get researchers to lay off massive datasets about human behavior that are ripe for the picking.

If there were any doubt that large-scale experimentation on social network users would continue after the fallout from Facebook's large emotion study published last month, rest assured: It will. Today, Proceedings of the National Academy of Sciences published another study about Facebook user behavior.

This shouldn't be any surprise—Facebook users offer a nice window into psychological processes like groupthink and decision-making, and it's a dataset that researchers are all too happy to pull from. It is a little surprising, however, that PNAS decided to publish a Facebook study just days after it issued an "Editorial Expression of Concern" regarding that first study.


In that expression of concern, the journal said that it had some qualms about how the data was collected for use in the paper—namely, that people didn't opt in and weren't allowed to opt out. Now, several days later, it has published another study that pulls its data from Facebook, collected in much the same way—without the chance to opt out or express consent.

In the latest study, researchers from Harvard, the University of Limerick in Ireland, and the University of Oxford suggest that, online, people are more likely to copy their friends than to take suggestions about what to use. In more concrete terms, people were more likely to install a Facebook app if their friends had just installed it than if it appeared in Facebook's short-lived "popular" apps suggestion.

To be clear, there are some major differences between the study published today and the one published last month. Most importantly, Facebook did not manipulate any data during the course of the study. Secondly, the newest study pulls from a dataset that was generated between June and August 2007 and was used in a separate PNAS study back in 2010.

The researchers used a supercomputer to build a mathematical model that would closely approximate the results of that earlier 2010 study—no new data was collected. The 2010 study—the last time the dataset was used—relied on data pulled from Facebook and from the applications themselves, and was used to suggest the "spontaneous emergence of social influence in online systems." It covered 100 percent of potential application users (that is, everyone on Facebook) and 99 percent of all available applications.


Today's study is, more or less, an exercise in computation. "The data used by the research team contains no information about individuals, and only information about individual applications, so there are no implications in terms of the privacy of individual Facebook users," a press release accompanying the study said. Nonetheless, it uses data that was collected without the express consent of its subjects to tell us something about human behavior.

I personally have no qualms about what was done in any of these studies—Facebook has been manipulating what you see in your newsfeed for years now, and it's no secret that Facebook applications are often backdoor data collectors. It is, for better or for worse, what we sign up for when we sign up for a free, for-profit social network.

But it does seem a bit underhanded for a journal to back away from a study that is much like dozens of other studies that have been put out there, then publish one mere days later that pulls data from many of the same people PNAS was trying to appease.

Social media is a dream come true for many psychological researchers, and one blip isn't going to change that. For proof, look no further than the conclusion of today's paper:

"Online experiments have been used successfully in computational social science, but it is challenging to run experiments in online environments that people actually use. If longitudinal data are available, as in the present case, it is possible to evaluate a model's fit based not only on long-time behavior but also on dynamical behavior," the researchers wrote. "As more observational data with high temporal resolution from online social networks becomes available, we believe that this modeling strategy… will become increasingly essential."