Watch This Robot Learn to Limp Like a Wounded Animal

Scientists simulated a robot's childhood so it could keep moving after being injured.

Jordan Pearson

Image: Antoine Cully/UPMC & Jean-Baptiste Mouret/UPMC

Robots are becoming more complex, with more moving parts and sensors, but a particularly fraught question remains as long as people can't stop kicking the shit out of them: how do you get a robot to keep on moving if one of its limbs is injured or destroyed?

One solution is to load a robot's computerized brain with a simulated childhood that gives it the past "experience" it needs to respond to an injury with a plan of action to keep walking, nearly as fast as it was before.

"If you think about how animals and humans work, when we're kids we play a lot," Jeff Clune, one of the scientists who developed the approach, told me in an interview. "And one of the things we're doing is figuring out how our body works. So, we can walk on our tippy-toes, or our heels, or the outside of our feet, or you can hop on one leg. You develop an intuition for how your body works, and the fastest way to do something. That's what our robot does."

It works like this: an algorithm developed by an international team of scientists from France and the US is given an extremely large "search space" of possibilities for motion—roughly 10^47, or a 1 followed by 47 zeros. The algorithm then runs simulations to build a database of possible behaviours—around 13,000 different ways to walk—and visualizes them as a "map." The algorithm also records each gait's predicted performance value: that is, how fast and straight it will make the robot walk.
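The map-building step above can be sketched in miniature. This is a toy illustration, not the team's actual algorithm: the real system uses a full physics simulator and a richer behaviour descriptor, whereas here `simulate` is a stand-in function and the gait parameters are invented. The core idea survives, though—bin candidate gaits by how they behave, and keep only the best performer in each bin, yielding a map of diverse, high-performing ways to walk.

```python
import random

random.seed(0)

def simulate(gait):
    """Stand-in for the physics simulation.

    Returns a coarse behaviour descriptor (which "family" of gait this is)
    and a predicted performance score for it.
    """
    descriptor = round(sum(gait) / len(gait), 1)            # crude behaviour bin
    performance = 1.0 - abs(0.5 - descriptor) + random.uniform(0, 0.1)
    return descriptor, performance

# descriptor bin -> (best gait found for that bin, its predicted performance)
behaviour_map = {}
for _ in range(10_000):
    gait = [random.random() for _ in range(6)]              # 6 made-up joint parameters
    desc, perf = simulate(gait)
    if desc not in behaviour_map or perf > behaviour_map[desc][1]:
        behaviour_map[desc] = (gait, perf)

print(len(behaviour_map), "distinct gait families mapped")
```

With only one descriptor dimension the map here holds at most a dozen entries; the scaled-up version the article describes holds around 13,000.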

In the wild, the physical robot—the scientists tried both a six-legged hexapod robot and a single arm—is only given sensors that can tell how fast and straight it's moving, to keep things simple. If the robot breaks a limb, it can't tell which leg or arm is now useless, only that it's no longer moving as fast or straight. An on-board algorithm then cycles through the map of possible gaits and eliminates whole families of similar ways to walk at a time, testing the performance of each method as it goes. Essentially, the robot takes its past, simulated experiences and builds upon them to adapt.
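That elimination loop can also be sketched as a toy. Again, this is an assumption-laden simplification—the family names, speed numbers, and `measure_on_robot` stand-in below are invented, and the real system reasons probabilistically over the map rather than deleting entries outright—but it shows the shape of the loop: try the most promising gait, compare the measured speed against the prediction, and discard a family whenever reality falls far short.

```python
# Pretend behaviour map: gait family -> speed predicted by the intact-robot simulation
gait_map = {f"family_{i}": 1.0 - i * 0.05 for i in range(20)}

# The damaged leg ruins these families (the robot doesn't know this in advance)
broken_families = {"family_0", "family_1", "family_2"}

def measure_on_robot(family):
    """Stand-in for the on-board sensors: actual speed after the damage."""
    return 0.1 if family in broken_families else gait_map[family] * 0.9

tries = 0
remaining = dict(gait_map)
while remaining:
    best = max(remaining, key=remaining.get)        # most promising untried gait
    tries += 1
    real = measure_on_robot(best)
    if real >= 0.85 * remaining[best]:              # close enough to prediction: done
        break
    del remaining[best]                             # eliminate this whole family

print(f"adapted after {tries} tries using {best}")
```

Because each failed trial rules out an entire family of similar gaits, the robot converges in a handful of tries rather than testing all 13,000 behaviours one by one.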

After seven to 10 tries, Clune told me, the robot has found a new way to walk that closely matches the performance of its original gait—essentially, limping away like a wounded animal.

The idea for giving a robot a childhood was developed by Jeff Clune from the University of Wyoming, Jean-Baptiste Mouret from the French Institute for Research in Computer Science and Automation (INRIA), and Antoine Cully and Danesh Tarapore from the French Institute for Intelligent Systems and Robotics. The results of the team's work were published today in Nature.

"You could take a childhood off the shelf from a similar robot and you'd still get much of the way there"

Besides children, non-human mammals have been the inspiration for other ways to correct a crippled robot's failings, too. Researchers from Georgia Tech devised a method for a falling robot to predict which landing position will result in the least damage, using felines' flawless falls as bio-inspiration.

Clune and his colleagues' approach is novel, however, because it really doesn't matter what kind of robot is using it. Remember, they tested their algorithms on an insectoid six-legged robot and a robot arm. In both cases, their system allowed the robots to keep on performing their tasks in new ways after an injury.

"We think that it would work on any robot, basically," Clune said. "Ideally, it would work best if you gave every robot it's own simulated childhood, which means that you let that robot, in simulation, figure out all the way it works. But my expectation is that if you didn't have the time to do all that, you could take a childhood off the shelf from a similar robot and you'd still get much of the way there. But that's an open research question for the future work."

Previously, Clune and Mouret devised a way for computer-simulated neural networks to modify themselves in order to learn more than one task at a time. As for what comes next for the team, Clune told me they want to use their algorithm with more complicated robots, like those in the Defense Advanced Research Projects Agency Robotics Challenge. One of those robots is Boston Dynamics' Atlas, the bipedal, kung fu-kicking, humanoid military robot of your dreams and/or nightmares.

Soon, our heady, humour-filled days of watching robots fall and totally eat shit could be over, replaced by images of injured 'bots crawling away like crippled insects.