
Facebook’s New Algorithm Combs Posts to Identify Potentially Suicidal Users

The site needs to strike a delicate balance between privacy and safety, but experts say it’s the right move.

In January, a 14-year-old Miami girl broadcast her suicide on Facebook Live, to the horror of her friends and family. Just three weeks earlier, a 12-year-old in Georgia had done the same on a site called Live.me, but the video began circulating on Facebook. And multiple suicides have been preceded by a goodbye post on Facebook.

While Facebook has long had protocols in place to identify and reach out to potentially suicidal users, it recently upped the ante. This week, Facebook announced beefed-up suicide prevention tools that use algorithms to scan posts for potentially suicidal language (posts about sadness, pain, or not being able to take it anymore, for example) and to take note of comments that may signal an issue, such as "are you okay?" and "I'm here for you."
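Facebook hasn't published details of its system, but the flagging step it describes, matching distress language in posts and concerned language in friends' comments, can be sketched conceptually. Here is a minimal, purely illustrative Python sketch; the phrase lists and function names are assumptions for illustration, and Facebook's real system is a learned classifier, not a keyword match:

```python
import re

# Hypothetical phrase lists for illustration only; Facebook's actual
# classifier is machine-learned and far more sophisticated.
DISTRESS_PHRASES = [
    r"can'?t take it any ?more",
    r"want to die",
    r"no reason to go on",
]
CONCERNED_COMMENT_PHRASES = [
    r"are you ok(ay)?",
    r"i'?m here for you",
]

def looks_distressed(text: str) -> bool:
    """Return True if the post text matches any distress pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in DISTRESS_PHRASES)

def has_concerned_comments(comments: list[str]) -> bool:
    """Return True if friends' comments suggest they are worried."""
    return any(
        re.search(p, c, re.IGNORECASE)
        for c in comments
        for p in CONCERNED_COMMENT_PHRASES
    )

def should_flag(post_text: str, comments: list[str]) -> bool:
    # Either signal is enough to surface the post; in a real system this
    # would route to a review and resources workflow, not a hard block.
    return looks_distressed(post_text) or has_concerned_comments(comments)

# Example: a distressed post plus a worried comment trips the flag.
print(should_flag("I can't take it anymore", ["Are you okay??"]))  # True
```

Either signal firing on its own is enough in this sketch, which mirrors the article's description: the system watches both what a user posts and how friends respond to it.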


Flagged posts prompt a resources page for both the user and his or her friends, with options to message crisis hotlines over Facebook Messenger and tips for reaching out to friends. For the first time, Facebook has also integrated these tools into Facebook Live specifically.

An example of what a user might see if his or her content is flagged. Image: Facebook

Facebook has been working with suicide prevention groups to get this outreach right, but it also has to strike a delicate balance between privacy and safety. This new protocol has made it clear that Facebook has the ability to monitor our posts for specific language and flag that content. It's not a huge leap to imagine this technology being repurposed for things like monitoring protest organization or criminal behavior, or creating more targeted advertising.

"Can machine learning, artificial intelligence, and other prediction technologies be used to do creepy things? Of course," danah boyd, a social media researcher, told me via email. "Because of the backlash against these technologies and their creepy potential, we often chastise companies when they try to do detection to help people, assuming that their efforts are nefarious. Personally, I commend Facebook for trying to figure out a pathway through this."

Boyd is the founder of Data & Society, a research organization that studies the intersection of society and social media. She's also the author of a book about how teens use social media, and is on the board of the Crisis Text Line, a suicide prevention texting hotline that has helped Facebook work on these protocols.

Boyd told me that while there's room for criticism, Facebook is mostly moving in the right direction with this effort. Privacy has to be a consideration, but sometimes safety must come first, boyd said.

"Striking a balance is hard, but that shouldn't stop us from trying to figure out the right path forward," boyd said. "This work is important. To truly help people who are in need, we must start by identifying signals of people who are struggling. One of the most heartbreaking realities of my work with teenagers is that many of them turned to the internet crying out for help and no one was paying attention. They didn't know how to reach out to services, their parents were often part of the problem, and they felt lost and alone."
