Can empathy ever be automated?
An example of a card from the Empathy Deck bot. Image: Erica Scourti
A bot might not be capable of feeling emotion, but perhaps it can pretend well enough to offer some comfort.
Artist Erica Scourti's "bot with feelings" responds to its Twitter followers with a custom-made card designed to offer an empathetic reaction. The bot, called Empathy Deck, pulls snippets of text from the artist's own diary as well as a variety of sources under the vague umbrella of "self-help" to chime in when it detects emotional content in its followers' tweets.
"It's kind of empathising on one hand, but there's also a sense of attempting to give a bit of advice, maybe like a slightly annoying friend who's trying to suggest things rather than just listening," Scourti told me in a phone call.
The result is a mish-mash of very personal, human experience provided by Scourti's memoirs and the kind of platitudes shared in motivational memes. Scourti explained that the "cards" the Twitter bot creates, which combine the text with images, are inspired by different kinds of self-help cards, from tarot to "wisdom cards" and even Brian Eno's "Oblique Strategies," a set of cards designed to inspire creativity.
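The mechanics described here, detecting emotional content in a tweet and then pairing a diary snippet with a self-help line, could look something like the following minimal sketch. Everything in it is invented for illustration: the word list, the snippets, and the function names are assumptions, not Scourti's actual code, and a real bot would use proper sentiment analysis rather than keyword matching.

```python
import random
from typing import Optional

# Hypothetical stand-ins for the bot's real sources: Scourti's diary
# and assorted self-help material. These examples are invented.
EMOTION_WORDS = {"sad", "anxious", "tired", "lonely", "overwhelmed"}

DIARY_SNIPPETS = [
    "walked for hours without deciding anything",
    "kept rereading the same message",
]
SELF_HELP_LINES = [
    "Honour the feeling, then let it pass.",
    "Small steps still count as movement.",
]

def detect_emotion(tweet: str) -> bool:
    """Crude keyword check standing in for real emotion detection."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & EMOTION_WORDS)

def make_card(tweet: str) -> Optional[str]:
    """Pair a personal snippet with a platitude, card-style,
    only when the tweet reads as emotional."""
    if not detect_emotion(tweet):
        return None
    return f"{random.choice(DIARY_SNIPPETS)} / {random.choice(SELF_HELP_LINES)}"
```

The collage-like quality of the real cards comes from exactly this kind of blind pairing: the two texts know nothing about each other, which is part of why the results read as both personal and oddly generic.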
Commissioned by the Wellcome Collection for its exhibition on mental health, Bedlam: The Asylum and Beyond, the bot is intended to explore the potential role of automation in the field of therapy and caring.
"If caring labour or mental health caring could be replaced by automation, then would it be? And what would that say about being able to care about others, as humans?" Scourti said.
Emotions are often held up as a defining feature of the human condition, but we're starting to see various attempts to create an artificial emotional intelligence as bots move beyond simple novelties or customer service scripts. The company behind Apple's Siri is currently trying to create customer service bots that recognise when you're getting aggro, while virtual therapists can pick up on facial expressions and tone of voice to guide their responses. Researchers have even developed an algorithm to spot depression based on the Instagram filters people use.
But the Empathy Deck bot reveals the limitations of algorithms when it comes to generating a really meaningful response; clearly, the bot's attempts at "empathy" are nowhere near a human's, and it often barely makes sense. For Scourti, that's kind of the point.
"In a sense it's also about the failure of automated empathy, because it's not getting it right—it's a bot, at the end of the day," she said.
But Scourti will only let the bot get things so wrong. Though its output is automated, she stressed the importance of taking responsibility for what it produces, especially as the bot is dealing with people's feelings.
Unlike bots left to their own devices, such as Microsoft's chatbot that ended up tweeting about white supremacy, Empathy Deck has a lot of restrictions built into it.
It will only interact with people who follow it and whom it follows back, and it filters the accounts it follows for spam and offensive language.
"We've also restricted things that it can say, just in case something goes wrong," said Scourti. "I know I haven't written anything offensive in my diary text, but you don't know how things are going to get splurged together or what it's found in the secondary texts."
Interestingly, Scourti said the bot also won't respond to words that express "genuine upset," precisely because it's not actually capable of empathy. "I don't want it butting in and upsetting people; that defeats the purpose," she said.
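Taken together, the article describes three guard rails: a mutual-follow requirement, a filter for offensive language, and a list of "genuine upset" words the bot declines to respond to. A hedged sketch of that gatekeeping logic might look like this; the word lists and function name are invented placeholders, not the bot's actual implementation.

```python
# All names and lists below are hypothetical illustrations of the
# restrictions described in the article, not Empathy Deck's real code.
BLOCKED_WORDS = {"spamword", "offensiveword"}   # stand-in for a real blocklist
UPSET_WORDS = {"grieving", "bereaved"}          # topics the bot stays silent on

def should_reply(tweet: str, author: str,
                 followers: set, following: set) -> bool:
    """Decide whether the bot is allowed to respond at all."""
    # Only engage with mutual follows.
    if author not in followers or author not in following:
        return False
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    # Never reply to offensive content, and never butt in on genuine upset.
    if words & BLOCKED_WORDS or words & UPSET_WORDS:
        return False
    return True
```

The interesting design choice is the last check: rather than trying harder on the hardest cases, the bot simply opts out of them, which is itself an admission of the limits of automated empathy.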