Here's What Your Robot Is Thinking When It Slams into a Wall

MIT researchers designed a way to visualize a robot’s plan of action.
Image: MIT

Watching autonomous robots do their thing can be an experience equal parts hypnotic and abstruse—just look at these swarming drones. Even if you understand how the algorithm that organizes their actions works, making sense of why they make a particular decision—especially a wrong or unexpected one—can be difficult. Now imagine that you can actually see what the bots are "thinking" before they decide which action to take.


Researchers at MIT have designed an augmented reality system that lets observers watch a robot's decision-making process play out in real time via projections on a virtual landscape. The setup looks like something out of Ghost in the Shell: a dark grid awash in green circles marking each robot's perception area, arrows showing potential directions, and flashes of light signaling when one bot is communicating with another. If a robot makes a bad call in this environment, like hitting a wall, researchers can quickly diagnose what went wrong instead of digging through its code.
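To picture what gets drawn on the floor, here is a rough, hypothetical Python sketch of that kind of overlay for a single robot: a perception circle, arrows for candidate headings, and a flash when it talks to a peer. The names and data structures are illustrative assumptions, not MIT's actual code.

```python
# Hypothetical overlay generator: turns one robot's state into drawing
# primitives (circle, arrows, flash) that a floor projector could render.
from dataclasses import dataclass, field
from math import cos, sin, radians


@dataclass
class RobotState:
    name: str
    x: float
    y: float
    sensing_radius: float                       # radius of the green perception circle
    candidate_headings: list[float] = field(default_factory=list)  # degrees
    talking_to: str | None = None               # peer name; drawn as a flash if set


def overlay_primitives(robot: RobotState) -> list[dict]:
    """Translate a robot's state into projector drawing primitives."""
    prims = [{"shape": "circle", "center": (robot.x, robot.y),
              "radius": robot.sensing_radius, "color": "green"}]
    for heading in robot.candidate_headings:
        tip = (robot.x + cos(radians(heading)), robot.y + sin(radians(heading)))
        prims.append({"shape": "arrow", "start": (robot.x, robot.y),
                      "end": tip, "color": "white"})
    if robot.talking_to:
        prims.append({"shape": "flash", "at": (robot.x, robot.y),
                      "label": f"{robot.name}->{robot.talking_to}"})
    return prims


if __name__ == "__main__":
    quad = RobotState("quad1", x=2.0, y=3.0, sensing_radius=1.5,
                      candidate_headings=[0.0, 45.0, 90.0], talking_to="rover1")
    for prim in overlay_primitives(quad):
        print(prim)
```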

The system, called Measurable Augmented Reality for Prototyping Cyber-Physical Systems (MAR-CPS), first projects a virtual outdoor environment for autonomous robots to navigate.

A high-level planning CPU orchestrates their actions, and their "beliefs"—a kind of mental picture of their environment drawn from sensor data—are fed into a projection CPU along with motion capture data. This allows the robots' "beliefs" to be projected on the ground. Researchers can then see what a robot is "thinking" about its environment, which paths it is likely to take, and why it eventually chooses one over another.
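To make that data flow concrete, here is a minimal sketch of the pipeline as described: a planner chooses an action from a belief, and the belief plus a motion-capture pose are packaged for the projection step. The names, the toy planning rule, and the message format are all assumptions, not the MAR-CPS implementation.

```python
# Toy belief -> plan -> projection pipeline, loosely following the article's
# description. Everything here is a simplified stand-in.
from dataclasses import dataclass


def step_toward(a: int, b: int) -> int:
    """One grid step from a toward b (-1, 0, or +1)."""
    return (b > a) - (b < a)


@dataclass
class Belief:
    """A robot's probabilistic picture of its surroundings."""
    obstacle_prob: dict        # grid cell (x, y) -> estimated P(occupied)
    goal: tuple


@dataclass
class MocapPose:
    """Where motion capture says the robot actually is."""
    x: float
    y: float
    heading_deg: float


def plan_action(belief: Belief, pose: MocapPose) -> str:
    """Stand-in for the high-level planning CPU: head toward the goal unless
    the next cell looks occupied, in which case hold position."""
    cell = (round(pose.x), round(pose.y))
    next_cell = (cell[0] + step_toward(cell[0], belief.goal[0]),
                 cell[1] + step_toward(cell[1], belief.goal[1]))
    if belief.obstacle_prob.get(next_cell, 0.0) > 0.5:
        return "hold"
    return f"move_to {next_cell}"


def send_to_projector(belief: Belief, pose: MocapPose, action: str) -> dict:
    """Bundle what the projection CPU needs to draw the robot's 'thoughts'
    at its real-world location on the lab floor."""
    return {"pose": (pose.x, pose.y, pose.heading_deg),
            "belief": belief.obstacle_prob,
            "chosen_action": action}


if __name__ == "__main__":
    belief = Belief(obstacle_prob={(3, 4): 0.9}, goal=(5, 5))
    pose = MocapPose(x=2.2, y=2.8, heading_deg=40.0)
    action = plan_action(belief, pose)          # "hold": the next cell looks blocked
    print(send_to_projector(belief, pose, action))
```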

Traditionally, researchers developing autonomous robots run a simulation alongside their test and monitor both to get an idea of what's going on. If the actual robots do something the simulation doesn't reflect, hapless roboticists have no immediate clue what went wrong with the physical system. MAR-CPS gives them a peek into what their robots are perceiving when a test goes wrong. For researchers, this is a huge advantage that could streamline the design and debugging process. For the rest of us, it simply looks fucking sweet.


"It's really difficult to get a sense of what's going on directly on the physical system, because we have no understanding of what the mental perception of these robotic vehicles are," Shayegan Omidshafiei, one of the system's designers, told me. "This gives us the ability to project the beliefs of the vehicles directly on top of them. So, we can look at the platform and immediately understand, this is why the decisions were made."

According to Omidshafiei, MAR-CPS is a low-cost way for researchers to test their autonomous robots safely and legally. Currently, the Federal Aviation Administration—which has become notorious for its confused approach to hobby drone regulation—bans private institutions from testing autonomous flying robots in public. Hauling robots, researchers, and a ton of lab equipment into the field is a costly and harrowing task, compounded by the questionable legality of it all. The MIT researchers' platform could be used to test these technologies in a controlled environment, eventually making them safe enough for use in public.

"Now the idea is we're able to bring the outside world into an enclosed lab environment," Omidshafiei said. "Not only do you not have to bring all your equipment and researchers out into the field, but you also have more capabilities in terms of being able to simulate multiple environments at the same time. You can quickly switch between different outdoor environments, and this can be handled by one or two lab members."

Image: MIT

Besides providing the means to quickly debug complicated robotic systems on the cheap, MAR-CPS could also be used for entertainment. Because the system makes use of motion capture tech, participants can turn physical objects into props that interact with the robots inside the virtual environment. Take some cool projections, throw in some motion capture, add a drone or two—sounds fun, right?

"You can think of a situation where you have a quad copter flying around, and you can pass a baton that's recognized as an object in our simulated world, and they can move this prop around and interact with the quad. They can navigate. You can think of it as a joystick."

Omidshafiei said that his team has already been tapped by NASA and Boeing, and those organizations are just about ready to put MAR-CPS to use testing their own robots. Clearly, this isn't just another wacky proof-of-concept. Seeing into the "minds" of our robots is now possible, and it could help us make them better.