Watch This Robot Arm Play Catch
It can snatch a flying object out of the air in milliseconds.
This robot arm can catch flying objects in a fraction of a second, and it learned from the best: humans.
Just like you might teach your kid to catch a tennis ball by demonstrating the action and then throwing the ball at them over and over, this robot learned through a kind of “trial and error” approach. Developed at the Learning Algorithms and Systems Laboratory at Switzerland’s EPFL, it can now catch objects as varied as a water bottle, a tennis racquet, a hammer, and a cardboard box in under five hundredths of a second.
PhD candidate Ashwini Shukla, who worked on the catching algorithms for the robot arm, explained how it learned to move with such finesse. “There are two basic things to catching an object in flight: One is to actually predict the flight of the object, and the other is for the robot to actually move its arm to the predicted location where the catching is supposed to happen,” he explained. The researchers outlined their solution in the journal IEEE Transactions on Robotics.
To build a mathematical model to satisfy the first of these, they threw objects toward the robot but didn’t require it to try to catch them; instead, it just watched their flight paths via 18 infrared cameras. Those observations fed a model that predicts the motion of the object and where it will land.
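To get a feel for what such a prediction model does, here is a minimal sketch, not the lab’s actual estimator: it assumes simple ballistic flight (constant gravity, no drag, no sloshing liquid) and fits per-axis polynomials to the noisy positions the cameras report, then extrapolates to a future time. Function names and the choice of a quadratic fit are illustrative assumptions.

```python
import numpy as np

def fit_trajectory(times, positions):
    """Fit a simple ballistic model (constant gravity, no drag) to
    3D position samples. `positions` is an (N, 3) array of camera
    observations; returns one polynomial per axis."""
    # x and y are (near-)linear in time; z picks up the gravity term,
    # so a degree-2 fit per axis captures all three.
    return [np.polyfit(times, positions[:, axis], deg=2) for axis in range(3)]

def predict(coeffs, t):
    """Extrapolate the fitted polynomials to predict the 3D position at time t."""
    return np.array([np.polyval(c, t) for c in coeffs])

# Example: observe the first 0.3 s of a throw, then predict ahead to 0.45 s.
times = np.linspace(0.0, 0.3, 10)
positions = np.array(
    [[1.0 + 2.0 * t, 0.5 * t, 1.5 + 3.0 * t - 4.905 * t * t] for t in times]
)
coeffs = fit_trajectory(times, positions)
future_point = predict(coeffs, 0.45)  # where the arm should reach
```

In the real system the estimate would be refreshed as each new camera frame arrives, so the predicted catch point keeps improving while the object is still in the air.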
When it came to actually reaching out for the missile, the robot learned by observing the masters at work. It scrutinised the way humans move their hands and fingers to catch stuff through data collected from wearable sensors in a “data glove.”
The glove contained sensors that recorded details such as how much the person wearing it curls their fingers, to teach the robot arm how to grasp the various shapes being lobbed in its direction. “So the robot doesn’t have to actually compute anything explicitly in real time, and it can just rely on these models to generate motion,” said Shukla.
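A toy sketch of that idea, with entirely hypothetical numbers: store a learned finger-curl target per object (what the glove recorded from human catches) and blend the hand from a flat palm toward that target as impact approaches, so the grasp is looked up rather than computed from scratch at catch time. Object names, curl values, and the linear blend are all illustrative assumptions, not the lab’s model.

```python
import numpy as np

# Hypothetical finger-curl targets (0 = flat, 1 = fully curled), one value
# per finger, as a data glove might record them from human demonstrations.
LEARNED_GRASPS = {
    "bottle":  np.array([0.8, 0.8, 0.8, 0.8, 0.7]),
    "racquet": np.array([0.9, 0.9, 0.9, 0.9, 0.9]),
    "hammer":  np.array([0.9, 0.85, 0.85, 0.85, 0.8]),
}

def hand_posture(obj, time_to_impact, preshape_window=0.2):
    """Blend from a flat palm toward the learned grasp for `obj` as the
    predicted impact nears; nothing is optimized online, just looked up."""
    target = LEARNED_GRASPS[obj]
    # 0 while the object is still far away, 1 at the moment of impact.
    progress = np.clip(1.0 - time_to_impact / preshape_window, 0.0, 1.0)
    return progress * target

# 0.1 s before impact the fingers are halfway to the bottle grasp.
halfway = hand_posture("bottle", time_to_impact=0.1)
```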
The result is a lot more realistically varied than, say, Disney's throwing bot; rather than keeping its hand in a fixed place and a fixed cupped shape, this robot starts with its palm flat, and its fingers and joints twist, bend, and reach for different objects coming at it from different directions.
Some of the things thrown at it aren’t easy, either. The water bottle, for instance, is half full, which makes its trajectory more unpredictable as the liquid sloshes around mid-flight—and the robot’s only supposed to catch the tennis racquet by the handle. Shukla said it currently has about a 60-70 percent success rate, but that many of the attempts counted as failures were down to technical issues like the requirement for it to catch things “softly.”
Shukla told me the application of the tech isn’t really about making robots catch stuff; the project is a broader investigation into how these kinds of models can help robots react to their surroundings in a split second. “The underlying principles of these algorithms is that a machine should be able to react in real time to perturbations in the environment,” he said. An autonomous car, for example, would need to react to other traffic or pedestrians crossing the street with the same kind of speed.
They’re now working on fine-tuning the process, like teaching the arm to catch in a way that absorbs the force of the impact rather than just snatching the object from the air. Let’s just hope it doesn’t learn to throw that hammer.