
When Humans Bully Robots, There Will Be Consequences

Could bullying a bot make it more of a threat?
Image: Charles Taylor/Shutterstock

Allusions to dystopian futures ruled by robotic overlords are hard to miss. Just last week, a tragic accident in Germany, in which a young worker was killed by an industrial robot, spurred cries of killer bots, and physicist Stephen Hawking and tech entrepreneur Elon Musk have long warned of the dangers of a robot apocalypse.

But we rarely turn the tables and ask what would happen if we humans subjected our robotic minions to abuse. Could some of our robots actually become mechanical victims, targets of human vandalism and bullying? What would this mean for us once we introduce carebots, garbage-collector bots, or assistant-teacher bots into our societies at large?


Back in 2010, Pericle Salvini, a researcher at the Scuola Superiore Sant'Anna in Pisa, co-authored a conference paper entitled "How safe are service robots in urban environments? Bullying a robot."

In this paper, Salvini coined the phrase "robot bullysm," referring to any kind of abusive behaviour that might prevent a robot from carrying out its task. Salvini told me that, back then, the robotics community didn't really take him seriously. But now, with the rise of autonomous cars and drones, he argues the subject deserves renewed discussion.

"If vandalised, a moving object is much more dangerous than a fixed object like a phone box."

Salvini said that if we're game to vandalise static objects, we're probably open to targeting moving ones too. And he argued that "bullied" or vandalised robots could inadvertently pose a greater threat to humans.

"A phone box is often an object of vandalism—a robot is something that moves and can bring stuff. If vandalised, a moving object is much more dangerous than a fixed object like a phone box," he explained in an email. "Consider how anybody with a stick or spray would be able to sabotage a robot's functioning […] If it's not programmed to manage this problem it may hurt something or somebody."

"You cannot oblige passers-by to wear helmets or read user-manuals," Salvini added. "Are we sure that by introducing autonomous cars, drones, or whatever kind of autonomous robot into society, we will not introduce a new kind of accident or risk? Are we aware of what kind of danger and/or risk it will be? Is the risk sustainable for society?"


People—specifically kids—have been known to bully robots. When Salvini and his team conducted their behavioural study with robots at a public demonstration in South Korea, they found that young people "tended to react to the robot's presence with extreme curiosity and, quite often, to treat them aggressively."

Just last month, a research team from Japan followed up on Salvini's research and published a short conference paper entitled "Why do children abuse robots?"

"Some children perceived the robot as human-like, but they still showed abusive behaviour."

They observed children under the age of ten interacting with a white, human-sized robot in a Japanese shopping mall, and documented "serious abusive behaviours with physical contact such as kicking, punching, beating, folding arms, and moving (bending the joints of robot's arm and head)."

Some kids didn't quit even when the mauled robot's pre-programmed response asked them to stop harassing it.

To reduce the risk of a robot inadvertently retaliating after human abuse, Salvini suggested that engineers and designers consider these risks when they program and design their robots. In other words, they should build robots that can detect and manage situations where they're being bullied or vandalised by kids, or adults, running riot.
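What might that "abuse-aware" programming look like? Neither Salvini's nor Nomura's paper publishes code, but a minimal sketch of the idea, using entirely hypothetical sensor and motor hooks in place of any real robot's API, could be as simple as counting recent impacts and degrading gracefully, stopping motion and calling for a human, rather than pressing on with the task:

```python
# Illustrative sketch only: the class and method names here are hypothetical
# stand-ins for whatever hardware API a real robot would actually expose.
import time
from collections import deque

IMPACT_WINDOW_S = 10   # only count impacts from the last 10 seconds
IMPACT_THRESHOLD = 3   # this many impacts in the window counts as "abuse"

class AbuseAwareController:
    def __init__(self):
        self.impacts = deque()  # timestamps of recent bump/force events

    def record_impact(self, timestamp=None):
        """Called whenever a bump or force sensor registers a hit."""
        now = timestamp if timestamp is not None else time.monotonic()
        self.impacts.append(now)
        # Drop impacts that have aged out of the detection window.
        while self.impacts and now - self.impacts[0] > IMPACT_WINDOW_S:
            self.impacts.popleft()

    def is_being_abused(self):
        return len(self.impacts) >= IMPACT_THRESHOLD

    def step(self):
        """One tick of the control loop: degrade gracefully under abuse."""
        if self.is_being_abused():
            self.halt_motion()     # stop first, so the robot can't hurt anyone
            self.speak("Please stop. I am calling for assistance.")
            self.alert_operator()  # escalate to a human rather than retaliate
        # ...otherwise, carry on with the normal task.

    # Hypothetical actuator hooks; a real robot would call its own API here.
    def halt_motion(self): print("[motors] halted")
    def speak(self, msg): print(f"[speaker] {msg}")
    def alert_operator(self): print("[network] operator notified")

# Simulated run: three quick hits trip the abuse response.
ctrl = AbuseAwareController()
for _ in range(3):
    ctrl.record_impact()
ctrl.step()
```

The design choice embedded in the sketch mirrors Salvini's point: the safe response to abuse is to stop moving and escalate to a human, never to keep operating blindly, since an unmanaged, vandalised machine in motion is what poses the real danger.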

When I reached out to the researchers who wrote the Japanese paper, they suggested that if robots were to be used widely in society, more education was needed in schools to help children understand what robots were, what place they held in society, and how to treat them. In other words, kids need to learn to empathise with robots just as they need to learn to empathise with each other.

But simply making robots appear more human-like, they said, isn't the answer. "Human likeness probably isn't that powerful a way to moderate robot abuse," Tatsuya Nomura, lead author of the study and a professor in the department of media informatics at Ryukoku University, told me over email. "Some children in the field study perceived the robot as human-like, but they still showed abusive behaviour."

It might be a while yet before our supposed overlords can muster the agility or balance to grab us like a robotic King Kong. But while engineers, ethicists, and designers are still tinkering with the mechanisms and legislation around our robotic counterparts, it's probably not a great idea to go around trying to kick any robotic ass.

Worried about killer robots? Maybe you should be. Watch Motherboard's documentary Inhuman Kind.