
    Only 20 Percent of the Top Tweets During the Boston Bombing Were True

    Written by Lex Berko


    Image via Wikipedia.

    You can praise Twitter for being fast, but you can't really praise it for being accurate.

    During the Boston marathon bombings and the subsequent manhunt for the Tsarnaev brothers, Twitter was a major source of misinformation. People were so caught up in the urgency of the moment, wanting in equal parts to commiserate and freak out, that the internet’s usual skepticism vanished and, in its place, users began retweeting any remotely plausible piece of information ad nauseam.

    Later, we discovered how distorted our Twitter-borne perceptions were. Social media, and some of its most prominent users, received a pretty severe censure in the press for propagating false information.

    But what are the precise numbers behind what happened on social media over the course of those days? And how can we stop potentially harmful tweets from proliferating in the future? To answer these questions, a trio of researchers in India quantified and analyzed the internet’s collective response to the bombings.

    Using data from Twitter’s streaming API, their report details how we use and misuse social media during times of crisis, and offers suggestions on how to monitor false information as it courses through the web in times of unrest.

    From the approximately 7.9 million unique bombing-related tweets collected from 3.7 million unique users, the researchers looked at the 20 most popular. Of those 20, a little over half consisted of unremarkable commentary (#PrayForBoston) and personal opinions (Our thoughts go out…).

    Of the remaining verifiable tweets, 20 percent were true, while 29 percent contained fake information or rumors, like "R.I.P. to the 8 year-old boy who died in Boston’s explosions, while running for the Sandy Hook kids. #prayforboston."
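Once each tweet is labeled, shares like these are a straightforward tally. A minimal sketch in Python, using made-up labels for a hypothetical 20-tweet sample (the article doesn't give the study's per-tweet classifications, so the exact split below is illustrative):

```python
from collections import Counter

# Hypothetical labels for a 20-tweet sample; the real study's per-tweet
# classifications aren't reproduced in the article.
labels = ["generic"] * 11 + ["true"] * 4 + ["fake"] * 5

counts = Counter(labels)
shares = {category: n / len(labels) for category, n in counts.items()}
# e.g. shares["true"] == 0.2 for this made-up sample
```

The same tally scales to the full corpus unchanged; only the labeling step is hard.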

    No matter how you might feel about "new media," you probably want it to be true more often than it's false. All the same, there's nothing too shocking in these results; as the saying goes, "a rumor is halfway around the world while the truth is buried under cries of 'FALSE FLAG.'"

    But what is somewhat surprising is that when the researchers looked at the user profiles of those propagating false tweets, they discovered that high numbers of verified accounts were responsible for retweeting misinformation.

    “The high number of verified and large follower base users propagating the fake information can be considered as the reason for the fake tweets becoming so viral,” they note. “It becomes difficult for the users to differentiate which sources to trust and which not.”

    True enough, but a big reason verified accounts drive misinformation is that they drive a lot of Twitter traffic, period. Some dude in Worcester, Mass. and a Kardashian sister might both know equally little, but the Worcester dude's sentiments, assumptions, or outright misinformation make far smaller ripples in the Twitter pond than the verified reality show star's.

    The findings also depict how trolls menace Twitter in the hours and days after a tragedy. Between April 15 and April 20 of this year, 31,919 new accounts were created that also tweeted about the Boston bombings. Two months later, 19 percent of those accounts had been suspended or deleted by Twitter for bad behavior. Many of them traversed the typical route of a spammer, tweeting the same content repeatedly and capitalizing on people’s confusion through imitative fake profiles.
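The repeat-content behavior described above is the easiest spam signal to check mechanically: count how often each account posts the same normalized text. A hedged sketch (the function name, normalization, and threshold are my own, not the researchers'):

```python
from collections import defaultdict

def flag_repeat_spammers(tweets, threshold=3):
    """Flag users who post the same normalized text `threshold`+ times.

    tweets: iterable of (user, text) pairs.
    Normalization here is crude on purpose: lowercase and collapse
    whitespace, so trivial variations still count as the same content.
    """
    counts = defaultdict(int)
    for user, text in tweets:
        normalized = " ".join(text.lower().split())
        counts[(user, normalized)] += 1
    return {user for (user, _), n in counts.items() if n >= threshold}

sample = [
    ("bot1", "Pray for Boston!"),
    ("bot1", "pray  for boston!"),   # extra whitespace, same content
    ("bot1", "PRAY FOR BOSTON!"),    # case change, same content
    ("alice", "thoughts with boston"),
]
flagged = flag_repeat_spammers(sample)  # → {"bot1"}
```

A production version would need fuzzier matching (near-duplicates, link-only tweets), but the counting core is the same.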

    Halting potentially harmful social media activity during an emergency is an enormous task. As we’ve seen, heaps of brain spurts, factoids, and other forms of communication flood the internet every second. Recognizing this, the researchers wrote, “any algorithms or solutions built to detect rumors on [online social media] should be scalable enough to process content and user data up to the order of millions and billions.” And they need to do so in real time.

    In terms of the actual functioning of such a powerful algorithm or solution, the researchers showed that while it’s not all that easy to identify which tweets are real or fake, it is possible, to some extent, to predict the viral potential of a piece of content. By examining certain user attributes, including social reputation, global engagement, and likability, the researchers wrote an equation that measured the impact of a user. Then, using linear regression, they were able to anticipate the future popularity of a tweet.
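The article doesn't reproduce the paper's actual weights or regression, but the general shape, combining user attributes into a single impact score and then regressing popularity on it, can be sketched. Everything below (the weights, the toy data, the function names) is illustrative:

```python
def user_impact(social_reputation, global_engagement, likability,
                weights=(0.5, 0.3, 0.2)):
    """Combine the three attributes the researchers name into one score.

    The weights are made up for illustration; the paper's actual
    formulation isn't given in the article.
    """
    w1, w2, w3 = weights
    return w1 * social_reputation + w2 * global_engagement + w3 * likability

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Toy training set: impact scores paired with observed retweet counts.
users = [(0.9, 0.8, 0.7), (0.2, 0.1, 0.3), (0.6, 0.5, 0.5), (0.1, 0.2, 0.1)]
impacts = [user_impact(*u) for u in users]
retweets = [950, 40, 500, 25]

a, b = fit_linear(impacts, retweets)
predicted = a + b * user_impact(0.8, 0.7, 0.6)  # forecast for a new user
```

On this toy data the fitted slope is positive, so higher-impact users are predicted to draw more retweets, which is the intuition the researchers formalize.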

    It will be necessary to verify these results against similar events, but these analyses make us more aware of how a sense of urgency during a crisis can lead to social media abuse. Going forward, they may even help us limit such activity.

    (H/T Smithsonian)