How to Teach Manners to a Robot

Meet the roboethicist training machines to be polite.

Image: The robot politely lets the man off the elevator before getting on. Open Roboethics Initiative/YouTube

Usually when we talk about roboethics, the field of research that deals with the moral and social implications of developing artificially intelligent machines, we’re trying to avoid dystopian imaginings of evil robot overlords turning against humans, or more realistic concerns like how to stop a factory robot’s heavy metal arm from whacking its coworker’s head off.

But some, including AJung Moon, a University of British Columbia researcher and member of the Open Roboethics Initiative, believe there are even more practical, subtle issues that need to be tackled first—like teaching robots how to be polite.

The question of tact is becoming increasingly relevant now that humans are starting to interact with robots on an everyday basis, in everyday situations. Take Baxter, the robot designed by Rethink Robotics that’s being tested to work in coffee shops and other social jobs. No one wants a rude barista serving them espresso, or working on their team.

If robots act like dicks, it’s not going to help humans accept this new social paradigm.

“If a robot barista is dropping coffees all over people, that’s not going to be a good interaction,” Moon told me. “We’re seeing robots pop up all over the place, and in different industries. It’s really essential for robots to be able to interact with people in a very natural way. There’s a very big link to how accepting people are going to be toward the technology.”

Though still preliminary, Moon’s research has already produced some impressive results. The university team programmed a Willow Garage PR2 robot to act courteously in various scenarios. Moon described the study results last week in a post on the blog Footnote, which was picked up by Mashable.

In one experiment, researchers tasked the PR2 with delivering a piece of mail to the floor above and programmed it to use the elevator to do so. The elevator door opened to reveal a person inside either carrying nothing, carrying a heavy box, or in a wheelchair. The robot then had to decide on the most polite course of action.

In cases where the mail was not urgent, the PR2 yielded to the person if they were carrying a heavy box or were in a wheelchair. When the robot had urgent mail to deliver, it was kind enough to ask the person if they were in a hurry. If the answer was yes, the robot cordially extended its arm and let the person pass.
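Stripped to its essentials, the behavior Moon describes is a small decision policy. The sketch below is a hypothetical reconstruction in Python, not the team’s actual code; the passenger states, the urgency flag, and the ask_passenger helper are invented for illustration, following the behaviors reported in the experiment.

```python
# Hypothetical reconstruction of the elevator-yielding behavior described
# above; the labels and actions mirror the article, not the team's code.

def ask_passenger(question: str) -> str:
    """Stand-in for the robot's speech/touchscreen interface."""
    return input(question + " (yes/no): ").strip().lower()

def choose_action(passenger: str, mail_is_urgent: bool) -> str:
    """Pick the polite course of action when the elevator door opens.

    passenger: "empty_handed", "heavy_box", or "wheelchair"
    """
    if not mail_is_urgent:
        # Non-urgent mail: defer to anyone with an obvious burden.
        if passenger in ("heavy_box", "wheelchair"):
            return "yield and wait for the next elevator"
        return "board the elevator"
    # Urgent mail: ask first, and still defer if the person is in a hurry.
    if ask_passenger("Are you in a hurry?") == "yes":
        return "extend arm, wave the person past, then wait"
    return "board the elevator"

print(choose_action("heavy_box", mail_is_urgent=False))
# -> "yield and wait for the next elevator"
```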

Programming a robot to know which course of action to take in various scenarios requires a lot of data. For the elevator pilot project, the team surveyed a group of eight people to build a matrix of preferred reactions for each scenario. The researchers used the responses as a set of training data, feeding them into a machine-learning algorithm that the robot used to map situations to decisions.
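Concretely, that pipeline can be pictured as: survey votes become labeled (scenario, preferred action) examples, and a classifier generalizes them into a policy the robot queries at decision time. The article doesn’t name the algorithm the team used, so the sketch below assumes an off-the-shelf decision tree from scikit-learn, with a toy stand-in for the survey matrix.

```python
# Minimal sketch of learning a politeness policy from survey data.
# Assumes scikit-learn; the actual algorithm the team used isn't named
# in the article, and this survey matrix is a toy stand-in.
from sklearn.tree import DecisionTreeClassifier

# Hand-rolled encodings for the scenario features and candidate actions.
PASSENGER = {"empty_handed": 0, "heavy_box": 1, "wheelchair": 2}
ACTIONS = ["board", "yield", "ask_first"]

# Each entry: (passenger state, mail is urgent) -> surveyed preference.
survey = [
    ("empty_handed", False, "board"),
    ("heavy_box", False, "yield"),
    ("wheelchair", False, "yield"),
    ("empty_handed", True, "board"),
    ("heavy_box", True, "ask_first"),
    ("wheelchair", True, "ask_first"),
]

X = [[PASSENGER[p], int(urgent)] for p, urgent, _ in survey]
y = [ACTIONS.index(action) for _, _, action in survey]

policy = DecisionTreeClassifier().fit(X, y)

# At decision time, the robot queries the learned policy.
prediction = policy.predict([[PASSENGER["heavy_box"], 1]])[0]
print(ACTIONS[prediction])  # -> "ask_first"
```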

Granted, the elevator experiment is limited, and not just because of its sample size. “[The experiment] is a lot dumber than you might imagine it to be,” Moon said. “We had to simplify it so it was all about that decision-making algorithm.”

But while the PR2 could make a decision once it had all the appropriate environmental inputs, it couldn’t recognize those environmental factors on its own, without the requisite data pre-programmed. That’s the big, hulking challenge for AI machines.

Part of the answer is equipping machines with the right hardware, like optical sensors, so they can recognize their immediate surroundings. But hardware is just the beginning; to make intelligent decisions, robots need access to a huge amount of behavioural and environmental data. The more information feeding into the algorithm, the more the machine has to work with and learn from.

Cloud robotics and the burgeoning Internet of Things could form that kind of data ecosystem.

“We can’t survey everyone endlessly to find out what a robot should do in every single situation, with every individual person,” Moon explained. “We have to figure out ways to get around that problem—dealing with lots of data and still being able to manage it.”

Tapping into the cloud is a promising area of robotics research, already being explored by Google with its Cloud-Based Robot Grasping Project and by Amazon with its Kiva warehouse robots.

These systems allow any robot to access data that is uploaded from a host of individual machines to the cloud. This is crucial; as Moon explained, “Not every robot is going to go through every experience or know every object in the world.”
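In outline, the appeal of cloud robotics is a shared experience store: one robot uploads what it learned in a situation, and any other robot can query that knowledge later. The sketch below is purely illustrative; the CloudKnowledgeBase class and its methods are invented for the example and don’t correspond to any real cloud-robotics API.

```python
# Illustrative sketch of a shared "cloud" knowledge base for robots.
# All names here are invented for the example; this mirrors the idea,
# not any real cloud-robotics service.
from collections import defaultdict

class CloudKnowledgeBase:
    """Central store that aggregates observations from many robots."""

    def __init__(self):
        self._observations = defaultdict(list)

    def upload(self, robot_id: str, situation: str, outcome: str) -> None:
        """A robot reports what happened in a situation it encountered."""
        self._observations[situation].append((robot_id, outcome))

    def query(self, situation: str) -> list:
        """Any robot can retrieve what the whole fleet has learned."""
        return self._observations[situation]

cloud = CloudKnowledgeBase()
cloud.upload("robot_a", "elevator_with_wheelchair", "yielded; user pleased")
# A different robot, which has never seen this situation, benefits:
print(cloud.query("elevator_with_wheelchair"))
```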

With the proliferation of cloud-based internet-connected devices and machines, torrents of data are available in the cloud. “You can imagine, if you have smart technologies all over your home, and if we could just integrate all these sensors to let us gather better data about these kinds of things instead of getting people to take surveys all the time—then this could be extended to a larger scale with a more accurate understanding of what’s going on,” she said.

Moon expects that interacting with polite robots could be a reality for the average person within the next five to ten years. But perfecting the technology is just the first hurdle to clear; policy, regulation, and legal concerns are another matter altogether. If a robot behaves badly and you suffer the consequences, can you sue the machine?

We’re going to need a legal framework in place to figure that out, said Moon. “We have to start thinking about if it’s going to be a liability issue on the part of the manufacturer, or if it’s going to be more a user-focused, case-by-case scenario when it comes to a lawsuit.”

To mitigate the risk of angry humans taking their mechanical brethren to court, Moon and her colleagues at the Open Roboethics Initiative are conducting surveys to gauge public attitudes toward how robots should behave, “so that the people who are designing them can be informed of this, and people who are coming up with policies about this can be informed about what people think as well. Hopefully we can speed up the process of policy-making.” Their first survey subject is Google’s autonomous car.

The end goal of all the research is to enable robots to resolve disputes with humans in everyday situations. Perhaps, if we teach robots to be polite in the first place, we won’t have to worry about them taking over the world—they’ll be too busy saying “sorry.”