


‘From the Mouths of Babes’: Who’s Responsible for What Bots Say Online?

It’s the robots’ internet and we’re just posting on it.
Image: Shutterstock

The internet is full of bots. Sometimes they jabber with one another, sometimes they flirt with us like we always wished the girl next door would, and when they're gone, causing follower counts to plummet, some people get very, very upset. But who is accountable for what these bots say on the internet? It's a question without a clear answer, but one with real consequences for bot makers.

Last week, Jeffry van der Goot, the Dutch owner of a Twitter bot that randomly posted sentences generated from his corpus of past tweets, was hauled in for questioning by police after the bot tweeted, "I seriously want to kill people." Police requested that the bot's account be taken down, and van der Goot obliged.
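The article doesn't say exactly how the bot assembled its sentences, but Twitter bots of this kind are typically built on Markov chains, which recombine word sequences sampled from a source corpus. Here's a minimal sketch in Python; the corpus file name is hypothetical, and a real bot would also need the Twitter API plumbing:

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, max_words=20):
    """Walk the chain from a random starting point to produce one sentence."""
    out = list(random.choice(list(chain.keys())))
    while len(out) < max_words:
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical usage: `my_tweets.txt` stands in for an archive of past tweets.
with open("my_tweets.txt") as f:
    chain = build_chain(f.read())
print(generate(chain))
```

The trouble with a generator like this is that it can splice innocuous fragments of the corpus into a sentence, such as a threat, that its owner never actually wrote.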


Following the incident, van der Goot seemed shaken. "Being interviewed by detectives is incredibly stressful, terrifying and intimidating," he tweeted.

The incident raises a serious question: who, exactly, is responsible when the legions of bots currently generating the majority of the web's traffic say something threatening or offensive? While the bot was ostensibly operated by van der Goot, it was built for him by bot designer Wxcafe. Who's to blame, if anyone at all, when a bot says something concerning?

The issue will become more important as bots continue to proliferate across social media, and they almost certainly will. The situation is already so dire that a tool exists just to determine whether a given Twitter user is a bot. Academics are predicting that the kind of machine-to-machine interaction that already constitutes the norm online will only grow in frequency. In other words, it's the robots' internet and we're just posting on it.

"Bots are always going to be a liability to their creators at a certain point"​

But bots have creators, and numerous decisions are made during the design process that may indicate a bot's eventual purpose. According to David Fraser, a lawyer specializing in online issues at Canadian law firm McInnes Cooper, the legal responsibility of bot makers is uncharted ground. Even so, it's likely that the people who design bots are ultimately legally responsible for their behaviour, as long as they intend to do harm.


"Under Canadian criminal law, there has to be an intent to commit a crime," Fraser said. "In some cases, there's an act of intent that says you intended to do this thing. But in other cases, and in certain other offenses, it could just be that you're reckless in allowing something to happen. I could imagine that one could end up being charged with—but harder to convict with—doing mischief with something like that."

"The challenge and question with something like a bot is going to be what did the actual person intend to have happen, and could they or should they have imagined the trouble that could result, and were they reckless or willfully blind to the consequences?" Fraser continued.​

Bot designers often do create restrictions on what their bots can say, according to Darius Kazemi, a prolific programmer who created a bot to buy random crap on Amazon. Whether or not a designer did their due diligence in creating a bot that wouldn't threaten people or spew hate speech could be at issue when it comes to legal responses to bad bot behaviour.

"I try to create bots that minimize speech that could be considered illegal," Kazemi said. "I have a universal list of what could be considered hate speech that my bots just don't say. I do design work to make sure that when I pull images from places, that I'm pulling images from relatively safe sources, like the first page of results for Google image search, where I can mostly count on Google to filter out something illegal."


"I think bots are always going to be a liability for their creators at a certain point," Kazemi continued. "With every bot it's almost like you're signing up for a customer service job. It doesn't even have to be that popular to be an issue."​

In an interview with Kashmir Hill at Fusion, van der Goot said that a blacklist of offensive words might be possible to build into a bot, but would be functionally useless. But, as Kazemi indicated, this isn't quite true, and blacklists are often used by programmers to develop closed bots with strict limits. There's even a site called Bot Innocence that helps bot designers build restrictions, with the explicit goal of "helping bots not be jerks."
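The kind of restriction Kazemi describes can be as simple as a final filter sitting between the text generator and the posting step. A minimal sketch, assuming a hand-maintained word blocklist; the list entries and function names here are placeholders, not Kazemi's actual code:

```python
import re

# Placeholder entries standing in for the "universal list" Kazemi describes;
# a real blocklist would be far longer and carefully maintained.
BLOCKLIST = {"kill", "murder", "die"}

def is_postable(text):
    """Reject any generated text containing a blocklisted word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return not (words & BLOCKLIST)

def post_safely(generate, post, max_tries=10):
    """Regenerate until the output passes the filter, then post it.

    `generate` and `post` are hypothetical callables: one produces a
    candidate tweet, the other sends it to the network.
    """
    for _ in range(max_tries):
        text = generate()
        if is_postable(text):
            post(text)
            return text
    return None  # give up rather than post something questionable
```

A word-level check this simple won't catch misspellings or context-dependent abuse, which is part of van der Goot's point, but it's exactly the sort of due diligence that, as Fraser suggests, could matter if a bot's output ever lands its creator in legal trouble.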

While van der Goot is correct that it would be impossible to prevent every kind of inflammatory speech from being uttered by a bot, there are ways to reduce your chances of creating a digital monster. And, legally speaking, that's what matters.

"I can certainly imagine a situation in Canada where the police do the exact same thing"

Actions against lawbreaking bots seem to occupy a weird kind of legal nether zone predicated on the assumption that nobody really knows who the hell is to blame for their behaviour. While there is no legal precedent in Canada for this kind of thing, Fraser said, Dutch police did request that van der Goot's bot be taken down; in Switzerland, police seized ecstasy pills procured by a darknet shopper bot, although the artists who created it were let off the hook. Regardless of their culpability, it's unlikely that a bot creator would face legal sanctions over their creation saying something mean.

"I can certainly imagine a situation in Canada where the police do the exact same thing [as in van der Goot's case]," Fraser said. "Now, whether they could then obtain a court order requiring a person to take down a bot, I really doubt that."

But programs aren't the only entities slandering, threatening, and irritating each other online. The internet is full of human trolls, too. As the Gamergate saga taught us, social media is a decidedly unsafe place for certain kinds of people; often, it's women who face harassment of every vile sort each day. Tools have even popped up to help people manage the onslaught of abuse from thinking, feeling people full of malicious intent, qualities absent from a bot, no matter how aggressive. If police already have trouble cracking down on human trolls who've decided to say horrible things, who's going to spend their time going after abusive robots?

"Bots can get away with saying things, socially speaking—I'm not sure about police—that a person can't," said Kazemi. "It's a 'from the mouths of babes' kind of thing. A four-year-old might say something awful, but it's considered hilarious. Right now, bots are often read like four-year-olds."

Thus, the question of who is legally responsible for all the chatty bots online is without a clear answer. The framework is there, in Canada at least, to prosecute someone for creating a bot meant to harass or for not building proper restrictions into the design. But whether or not someone will actually face jail time or other formal legal sanctions is unclear. As bots explode across the web at a staggering pace, however, it won't be too long before we find out.