In the weeks after Hurricane Katrina devastated New Orleans in 2005, the Federal Emergency Management Agency (FEMA) came under intense scrutiny for a bungled relief effort that left thousands of residents trapped in the city without the basic necessities FEMA had been tasked with delivering. The explanations and excuses for FEMA's failure were manifold, yet one can hardly help but wonder whether the relief effort might have saved more lives and eased more suffering had social media been available as a tool for emergency responders and victims alike.
In recent years, we've seen social media become a powerful tool for everyone from revolutionaries in Egypt to first responders in New York in the aftermath of Hurricane Sandy. Users have put Facebook, Twitter, and other platforms to work tracking down loved ones and distributing relief supplies, but when Katrina struck, Facebook was just over a year old and Twitter wouldn't appear on the scene for another seven months. Would they have made a difference for the victims of Katrina? It's tough to say with absolute certainty, but according to a study published Friday in Science Advances, the answer is likely yes.
In the aftermath of a natural disaster, FEMA performs complex modeling that considers everything from geography to infrastructure to the characteristics of the disaster itself in order to determine where the most severe damage is likely to have occurred. This allows the agency to distribute supplies in a timely manner to people in the regions most affected by the event, at least hypothetically. As the United States saw in the aftermath of Hurricane Sandy in 2012, and even more so in the wake of Katrina, inaccurate damage mapping can add weeks, if not months, to the time it takes relief supplies to reach those most in need. To save more lives, relief workers need better maps.
According to the team of researchers led by Yury Kryvasheyeu from Australia's National Information and Communications Technology Research Centre of Excellence, one of the most accurate ways to map damage in the aftermath of a disaster is to map the tweets about the event. In fact, using tweets to predict damage produced results slightly more accurate than the complex data modeling used by FEMA.
To arrive at this conclusion, the team examined all the tweets posted between October 15 and November 12, 2012, that referenced Hurricane Sandy, searching for keywords such as "hurricane," "Sandy," "Frankenstorm," and "flooding." Although some of these tweets already had associated map coordinates, others did not, so the team analyzed user accounts to determine where those tweets originated. When all was said and done, the team had a data set of nearly 10 million tweets from more than 2 million user accounts.
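In spirit, the filtering and geolocation steps look something like the sketch below. The keyword list comes from the article, but the matching rules, field names, and profile-location fallback are assumptions about the pipeline, not the researchers' actual code.

```python
import re

# Keywords named in the article; real-world filtering would likely
# be more elaborate (hashtags, misspellings, etc.).
STORM_KEYWORDS = {"hurricane", "sandy", "frankenstorm", "flooding"}

def mentions_storm(tweet_text):
    """Return True if the tweet contains any storm-related keyword."""
    words = re.findall(r"[a-z]+", tweet_text.lower())
    return any(word in STORM_KEYWORDS for word in words)

def resolve_location(tweet):
    """Prefer the tweet's own coordinates; otherwise fall back to the
    user's profile location (a hypothetical stand-in for the account
    analysis the team performed). May return None."""
    if tweet.get("coordinates"):
        return tweet["coordinates"]
    return tweet.get("user_profile_location")

sample = {"text": "Frankenstorm is flooding our street",
          "coordinates": None,
          "user_profile_location": "Hoboken, NJ"}
print(mentions_storm(sample["text"]))  # True
print(resolve_location(sample))        # Hoboken, NJ
```

Applied across a firehose of tweets, a filter like this yields the kind of geotagged, storm-specific data set the study describes.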
As the team found, those who were closest to the storm and most affected by its fallout were more likely to be talking about the event. Yet to account for extraneous variables that might skew the data (media reports might stoke irrational or inflated fear in people who were not severely affected by the storm, for instance), the team compared their tweet maps with data about Hurricane Sandy damage that had been collected by FEMA and the state governments of New York and New Jersey. When the team compared these two data sets, they found that Twitter was actually slightly better at predicting the location and severity of the damage than FEMA's own models.
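The core comparison can be illustrated with a toy calculation: normalize each region's storm-related tweet volume against its everyday tweet volume (so big cities don't dominate simply by tweeting more), then correlate those rates with assessed damage. The regions, counts, and damage figures below are invented for illustration; only the general approach follows the study.

```python
def normalized_rate(storm_tweets, baseline_tweets):
    """Storm tweets relative to a region's usual tweet volume."""
    return storm_tweets / baseline_tweets

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical regions: (storm tweets, baseline tweets, damage per capita)
regions = {"A": (900, 10_000, 8.1),
           "B": (300, 9_000, 2.4),
           "C": (50, 8_000, 0.3),
           "D": (600, 11_000, 5.0)}

rates = [normalized_rate(s, b) for s, b, _ in regions.values()]
damage = [d for _, _, d in regions.values()]
r = pearson_r(rates, damage)
print(round(r, 3))  # close to 1: activity tracks damage in this toy data
```

A correlation near 1 in data like this is what would let tweet activity stand in for a damage map; the study reports a strong, though not uniformly definitive, relationship of this kind.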
These results are encouraging for the future of social media as a tool to mitigate the fallout from natural and manmade disasters, but before FEMA and other disaster relief organizations abandon their own models in favor of social media modeling, more research needs to be done to account for variables that might skew the data, such as Twitter bots and populations that don't use social media at all. Moreover, if similar studies are done with Facebook, which has a larger user base, the results may become more precise and thus more useful in facilitating disaster relief.
“The correlation that we observed is not uniformly definitive in its strength for all events, and care should be taken in the attempt to devise practical applications,” the team wrote. “However, we believe that the method can be fine-tuned and strengthened by combination with traditional approaches. Our results suggest that, during a disaster, officials should pay attention to normalized activity levels, rates of original content creation, and rates of content rebroadcast to identify the hardest hit areas in real time.”
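The three signals the researchers name are straightforward to compute from raw counts. The function below is a toy illustration with invented numbers, not the study's actual methodology.

```python
def activity_signals(storm_tweets, baseline_tweets, retweet_count):
    """Compute the three signals named by the researchers from raw
    per-region counts (all inputs are hypothetical examples)."""
    original = storm_tweets - retweet_count  # tweets that are new content
    return {
        # storm chatter relative to the region's everyday tweet volume
        "normalized_activity": storm_tweets / baseline_tweets,
        # share of storm tweets that are original content
        "original_rate": original / storm_tweets,
        # share of storm tweets that are rebroadcasts (retweets)
        "rebroadcast_rate": retweet_count / storm_tweets,
    }

signals = activity_signals(storm_tweets=500,
                           baseline_tweets=10_000,
                           retweet_count=200)
print(signals)
# {'normalized_activity': 0.05, 'original_rate': 0.6, 'rebroadcast_rate': 0.4}
```

Tracked region by region in real time, spikes in numbers like these are what the researchers suggest could point officials toward the hardest-hit areas.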