Watch Google Research's Robots Learn Hand-Eye Coordination

It takes tons of practice, along with a neural network.

Google Research has been working on a new project in which a group of robots successfully learns something akin to hand-eye coordination.

Robots still struggle with mimicking even simple human motor skills, like grasping objects, as the company wrote in a post to its Google Research Blog on Tuesday.

"At a high level, current robots typically follow a sense-plan-act paradigm...This approach is modular and often effective, but tends to break down in the kinds of cluttered natural environments that are typical of the real world," wrote research scientist Sergey Levine.

The research team wanted to see if it could train robots to react to unpredictable environments more quickly and instinctively, which it did by enabling the robots to learn from one another.

Fourteen robots were networked together and tasked with learning how to grasp efficiently. At the end of each day, the robots' pooled experiences were used to train a shared neural network, which was then loaded back onto the robots the next day.
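The pooled-learning loop described above can be sketched in a few lines. This is a toy illustration, not Google's actual code: the model, the success simulation, and all names here are hypothetical stand-ins for the real neural network and physical grasping trials.

```python
# Hypothetical sketch of the collective-learning loop: each "day", every
# robot's (offset, success) grasp experiences are pooled into one dataset,
# a shared model is retrained on it, and the update reaches all robots.
import random

random.seed(0)

def true_grasp_success(gripper_offset):
    # Toy stand-in for the physical world: grasps succeed more often
    # when the gripper is close to the object (offset near 0).
    return random.random() < max(0.0, 1.0 - abs(gripper_offset))

class SharedGraspModel:
    """Tiny stand-in for the neural network: it learns one threshold,
    the largest gripper offset that still tends to succeed."""
    def __init__(self):
        self.threshold = 1.0  # start permissive

    def train(self, experiences):
        # Fit the threshold to the largest offset that actually succeeded.
        successful_offsets = [abs(o) for o, ok in experiences if ok]
        if successful_offsets:
            self.threshold = max(successful_offsets)

def run_day(num_robots=14, grasps_per_robot=50):
    """All robots attempt grasps; their experiences are pooled together."""
    pooled = []
    for _ in range(num_robots):
        for _ in range(grasps_per_robot):
            offset = random.uniform(-2.0, 2.0)
            pooled.append((offset, true_grasp_success(offset)))
    return pooled

model = SharedGraspModel()
for day in range(3):
    experiences = run_day()   # fourteen robots practice all day
    model.train(experiences)  # "end of each day": retrain on pooled data

print(round(model.threshold, 2))  # learned limit on useful gripper offsets
```

Because every robot contributes to one dataset, the shared model sees fourteen robots' worth of practice per day rather than one.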

"In essence, the robot is constantly predicting, by observing the motion of its own hand, which kind of subsequent motion will maximize its chances of success," Levine wrote. "The result is continuous feedback: what we might call hand-eye coordination."

So the robots learn to make new judgments based solely on a neural network that is trained, in turn, on their own collective practice. None of the new behaviors they exhibited had been explicitly programmed into the system; they emerged organically from experience.
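The "continuous feedback" Levine describes can be illustrated as a closed loop: at each step, the robot scores several candidate hand motions with its learned success predictor and executes the most promising one. Everything below is an illustrative sketch under that assumption; the predictor and names are invented for the example, not drawn from the actual system.

```python
# Hedged sketch of prediction-driven hand-eye coordination: observe the
# hand, score candidate motions, execute the best one, and repeat.

def predict_success(hand_pos, motion, target=0.0):
    # Stand-in for the trained network: motions that bring the hand
    # closer to the target (here, position 0.0) score higher.
    return -abs((hand_pos + motion) - target)

def servo_step(hand_pos, candidate_motions):
    """Pick the motion whose predicted outcome is best."""
    return max(candidate_motions, key=lambda m: predict_success(hand_pos, m))

hand = 1.0
for _ in range(10):  # closed loop: observe, score, move, repeat
    hand += servo_step(hand, [-0.2, -0.1, 0.0, 0.1, 0.2])

print(round(hand, 2))  # the hand converges toward the target
```

The key point is that the motion is never scripted: each step is chosen anew from the predictor's output, which is what lets the behavior adapt to clutter and surprises.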

The blog post contains more videos and nitty-gritty info, but I'm already convinced: Make way for our robot overlords.