ERWIN is shown frowning at the camera. Image: University of Lincoln
Human beings are less than perfect. We forget things, get angry, and have our vices. So how likely is it that we would relate to a robot that’s perfect in every single way?
Not likely, according to researchers over at the University of Lincoln, who’ve just presented their findings at the International Conference on Intelligent Robots and Systems (IROS) in Hamburg. They say that in the future, humans will probably accept their robotic pals more quickly if they’re just as flawed as we are.
“We’re looking at the personality of a robot, and asking how we can bring that in line with what people will accept from a companion robot,” John Murray, a principal lecturer at the University of Lincoln’s School of Computer Science, told me over the phone.
“Humans forget names, things, and appointments, so we were seeing if a robot with those kinds of traits would be more appealing to people, and the initial results from the research suggest that it is,” he said.
For their experiment, researchers asked members of the public to interact with two bots: the Emotional Robot With Intelligent Network (ERWIN), which is capable of five different facial expressions, and Keepon, a yellow robot designed to interact with kids to measure social development. During the first half of the experiment, neither robot displayed any cognitive flaws. For the second half, however, ERWIN botched fact-remembering exercises and Keepon had mood swings.
“People seemed to warm to the more forgetful, less confident robot more than the overconfident one,” said Murray, who admitted he was initially surprised by the results. “We thought that people would like a robot that remembered everything, but actually people preferred the robot that got things wrong.”
Marc, a 3D-printed robot from the same lab, offers a friendly handshake. Marc will be used for further study. Image: University of Lincoln
The logic, said Murray, parallels the idea of the “uncanny valley,” which holds that the more humanlike and perfect a robot appears, the creepier it becomes to humans. Murray explained that their research focused on a “personality uncanny valley,” whereby a human becomes less likely to warm to a robot that remembers everything to a T and has a flawless personality.
Given that development of care robots and emotional companion robots like Pepper and Jibo is ramping up, Murray and his team hope that their findings will help inform future robot designs.
“If you’re interacting with a robot in your home environment, you want to be able to relate to that robot, to communicate more effectively with that robot, so what we’re doing is looking to see if we can facilitate that,” said Murray. “We’re hoping that [this research] will inform robots to be developed in a way that makes them accepted more quickly in society.”
Next up, the researchers want to conduct their experiment over a longer time frame (months rather than weeks), and hope to build a wider range of cognitive biases, emotions, and expressions into their robot test cases.
“A large proportion of the communication we convey to one another is through our facial expressions, so by building that into robots, we can allow robots to convey that information in a way that humans understand,” explained Murray. “So we don’t have to learn what the robot is trying to say, as it’s a more natural interaction for us.”