


The Pageantry of Jillian Ogle's Livestreaming Robots


The growing popularity of live-streaming sites like Twitch, Hitbox, and Ustream has shown that people want to interact with the content they consume. This is the idea behind Let's Robot. Started by Jillian Ogle, a Bay Area-based game designer and participant in Intel's Software Innovators program, Let's Robot allows viewers of a Twitch.TV channel to control a real robot via chatroom commands. Users can do things like explore a cardboard dungeon, use the robot's arms to stuff food in an intern's mouth, or even punch a balloon that looks like Donald Trump. But this is only the beginning. Ogle sees this as the start of a new type of entertainment, where the relationship between viewers and content is two-way and interactive. Not only that: robots can go places we can't, like Aleppo or the Oscars.


Motherboard spoke with Ogle about her adorable robots, the emergence of live-streaming services, and where we can take this technology from here.

Can you tell me about your background?
I was an art director before working at Disney Interactive as a game designer. Four years ago I left that to do indie game development. I began working on this Twitch robot project part-time about two years ago, and for the last year I've switched over to full-time and stopped working on anything else.

What was it like to go from such a large corporation to developing indie titles?
Apart from the anxiety of not having a steady paycheck anymore, it has actually been pretty nice to work on the stuff you want to work on. It's also intimidating, because you're responsible for yourself. I learned very quickly that I had to develop a lot of skills in order to keep going. I had to take on programming and working with game engines. Eventually, with the robots, I had to get into lower-level programming, electronics, and mechanical engineering. It's fun and exciting, but also scary.

Can you tell us about the Twitch project?
The idea started with me wanting to make the world's first interactive live show. I thought robots were a great way to do it. I see all these live-streaming services that have popped up and gotten big over the last few years. Twitch was really becoming popular around the time I decided to do this. I saw a couple of people hacking Twitch to let the users who watched the stream actually play the video game. When I saw that, I realized the endgame of this was controlling stuff in real life, and affecting the physical world. I always wanted to do something with games and robotics. I had a sort of eureka moment.


You said you had to learn a lot of new skills. How much did you know about robotics at that point?
Zero. The very first robot I built was the one that first went online, about two years ago. It took me about a month to figure out how to make a robot that could stream to Twitch and take in commands via chat. I used a robot kit as a base. I did some research and found out, you know, there's a difference between a microcontroller and a single-board computer like the Raspberry Pi. The hardest part for me was figuring out how to get all the components talking to each other. That requires a lot of low-level programming that I never had to deal with before.
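
To make the chat-driven control concrete, here is a minimal sketch, not Ogle's actual code, of a bot that joins a Twitch channel over IRC and picks movement commands out of the chat. The token, bot account, channel name, and command words are placeholders.

```python
# Minimal sketch: read movement commands from a Twitch chat channel over IRC.
# The token, bot account, channel, and command words below are placeholders.
import socket

HOST, PORT = "irc.chat.twitch.tv", 6667
TOKEN = "oauth:replace-me"        # hypothetical token
NICK = "examplebot"               # hypothetical bot account
CHANNEL = "#examplechannel"       # hypothetical channel
COMMANDS = {"forward", "back", "left", "right"}

def handle(command):
    # On the real robot this would drive the motors; here we just print it.
    print("drive:", command)

sock = socket.socket()
sock.connect((HOST, PORT))
sock.send(f"PASS {TOKEN}\r\nNICK {NICK}\r\nJOIN {CHANNEL}\r\n".encode())

buffer = ""
while True:
    buffer += sock.recv(2048).decode("utf-8", errors="ignore")
    lines = buffer.split("\r\n")
    buffer = lines.pop()                      # keep any partial line for later
    for line in lines:
        if line.startswith("PING"):           # keep the connection alive
            sock.send(b"PONG :tmi.twitch.tv\r\n")
        elif "PRIVMSG" in line:
            text = line.split(":", 2)[-1].strip().lower()
            if text in COMMANDS:
                handle(text)
```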

What does your team look like?
My team fluctuates a lot. For the first year, it was just me, and then I started bringing some people in. I had the help of a guy named Ryan, a part-time web applications engineer, who recently left for another start-up. Over the summer, I had some people come on. One of the engineers has stayed on past the summer and is full-time on the project now. It's just him and me, plus a few part-time people who pop in and out, and friends who help with this and that. I'm actually looking to ramp up a bit on the project now.

Can you give me the technical details?
The main robots we have use a microcontroller called a Teensy, which is sort of like an Arduino. It has a small form factor, so that's nice. That controls the various motors we have on the robot. It's also connected to a Raspberry Pi, which is what talks to the internet. There's a video camera on the Pi, and it streams that video wirelessly to another computer, which runs a program we created in Unity, a game engine; that's what we use to process all the video and put in graphics. For instance, we can do some augmented reality stuff, like use 3D puppets to mimic video game bad guys.
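
For a rough idea of how the Pi-to-Teensy link she describes might look, here is a short sketch that assumes a simple text protocol over USB serial; the device path, baud rate, and "M left right" message format are illustrative, not the project's actual wire format.

```python
# Sketch: the Pi receives a chat command and forwards a one-line motor
# instruction to the Teensy over USB serial (pyserial). The "/dev/ttyACM0"
# path, baud rate, and "M <left> <right>" protocol are assumptions.
import serial

teensy = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

# Map each chat command to (left wheel, right wheel) speeds, -255..255.
DRIVE = {
    "forward": (200, 200),
    "back":    (-200, -200),
    "left":    (-150, 150),
    "right":   (150, -150),
}

def drive(command):
    left, right = DRIVE.get(command, (0, 0))
    teensy.write(f"M {left} {right}\n".encode())  # firmware on the Teensy parses this line

drive("forward")
```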


All the robots have wheels, so they can drive around, and RGB LED lights mounted on the front. I put those lights on for fun, but I've noticed that the users controlling the robot spend literally hours in front of the mirror making different faces with it. "Let's drive around and show people our face!" It's important to them that the robot is cute. This was one thing I underestimated.
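
The LED "faces" could be driven the same way as the motors. A tiny sketch, again assuming a made-up serial message rather than the actual Let's Robot command set, might map a chat word to a color for the front LEDs:

```python
# Sketch: map a chat word to an RGB color for the robot's front LEDs and send
# it over the same serial link. Face names and the "L r g b" message are
# illustrative assumptions, not the actual Let's Robot command set.
import serial

FACES = {
    "happy":  (0, 255, 0),
    "angry":  (255, 0, 0),
    "sleepy": (0, 0, 80),
}

def set_face(port, name):
    r, g, b = FACES.get(name, (255, 255, 255))   # default to plain white
    port.write(f"L {r} {g} {b}\n".encode())

# Example (requires the robot's serial port to be attached):
# set_face(serial.Serial("/dev/ttyACM0", 115200), "happy")
```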

What do the games look like?
Something we just did was host a stream during the presidential debate where we dressed up our robots to look like the candidates. We made their heads out of inflatable balloons, with little paper wigs, and then we made these little punchy arms, sort of like Rock 'Em Sock 'Em Robots. So the players used the arms to punch the other candidate and pop their head. We ran that whole stream simultaneously with the debate. It was hilarious.

There was another time where we drilled a hole through a table and put our intern's head through it, and then put a bunch of random food around the table and let the robots stuff his face. That was another pretty hilarious stream.

Most of the time, we're building robots on the Twitch channel. We always have a robot running around our office that's controlled by the users. We have a shared space, so they'll go annoy the other start-ups as well. Lately, we've been making various simple robots that also connect to Twitch. We'll do stuff like "donate $10 to the channel and you can slap us in the face with this robot." It's pretty silly.


I started all this by building dungeons out of cardboard and foam in my living room. My background was in game design, so I'm like, "Let's make it a game." There's definitely a narrative angle you could push; there's also the real-world exploration angle. But I started to realize it's a little bigger than that, right? With this project, you can give people access to things they couldn't access by themselves.

I'm reminded of the bomb disposal robots the military uses. I would love to hear where you see this going in the future.
You could send one of these robots to the Oscars. It could be collectively controlled by everybody that wants to go, and they can physically interact with the movie stars around them. You could put them in dangerous places, like you said; instead of sending a reporter out, who can only get so close, you can send a robot out into the hurricane or the Fukushima plant, or wherever the news is happening. That seems pretty powerful to me.

When you have a phone, that makes your audience mobile and portable. But now, I feel like what we're doing is giving the audience autonomy. Now, instead of being tethered to this human, we can go places and explore. That doesn't even have to be video; we can do it in VR. We can build a 3D representation of the space, and people can put on a VR headset, and, for example, attend a rock concert via this robot. And it's scalable, because you're putting lots of people on this single robot.


This is the project you're working on with Intel, right?
The project with the Early Innovation program is still very much in progress. It's basically an upgraded version of the robots I already have, but instead of streaming video from the robot, we are building a 3D representation of the environment, and streaming that over the web to anyone using VR.

It will be a basic working proof-of-concept, and I still have some work to do on it. We'll be using the Intel® RealSense™ 3D camera kit to do the simultaneous localization and mapping, and possibly an Intel® Joule™ or an Intel® NUC to process the data locally before sending it up to a server. As with the current robots, we are using the Unity3D game engine to process the graphics and deliver the 3D content.
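
For a sense of just the capture step, here is a minimal sketch using the librealsense Python bindings (pyrealsense2) to grab one depth frame and turn it into a point cloud; the SLAM, server upload, and Unity/VR side described above are out of scope here, and the SDK the project actually uses may differ.

```python
# Sketch: capture one depth frame from a RealSense camera and project it into
# a point cloud with pyrealsense2. Stream settings are illustrative defaults.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    pc = rs.pointcloud()
    points = pc.calculate(depth)                     # project depth pixels into 3D
    verts = np.asanyarray(points.get_vertices())     # (x, y, z) triplets in meters
    print("captured", verts.shape[0], "points")
finally:
    pipeline.stop()
```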

This is just a demo at best, whereas my long-term goal would be to allow full VR immersion for an audience of people via robots streaming data from wherever they're located.

So how many robot prototypes do you have?
Somewhere between eight and ten live-streaming robots. They're fairly simple, mostly, though we did build a couple of complicated ones. We built one that plays Pokemon Go, for instance; it holds the phone and drives around outside. One of the interesting things that came of that was an email I got from somebody who participated in some of those streams. He said he was disabled, and he loves Pokemon, but he hadn't been able to play this game, because he can't really go outside. But now, with this stream, he was able to have access to the game and see what it's all about. So he sent an email saying, "Thank you." That was a pretty neat little thing that happened.


I can see this having a lot of applicability for people with disabilities.
Definitely. Imagine you can't go to the Grand Canyon. You can't go to concerts. It might be hard for you to get up and go to a museum. Right now, if a museum wanted to give a virtual tour, it's expensive for them to get the telepresence robots. But if they were able to onboard many people onto a single robot, suddenly that cost scales much better. You could have a group tour all on one robot, with one of those 360 cameras so everybody can look around, and a tour guide.

When you mention multiple people controlling one robot, how does that work? If everyone is pulling it in different directions, wouldn't it just stay in one spot?
There are several layers to the solution. One is the way the robots have mostly worked up until recently (keep in mind, we're using Twitch, which has just a chat interface): when you type 'go forward,' the robot goes forward by a discrete amount, like a few inches. So if you want it to keep going, you have to give it a lot of input. That encourages a lot of people to participate.
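
As an illustration of that discrete step, each chat command might translate into one short, fixed-length motor pulse, so sustained motion takes a steady stream of viewer input; the pulse length and the drive/stop callables below are assumptions, not the real firmware behavior.

```python
# Sketch: one chat command = one short motor pulse. STEP_SECONDS and the
# drive()/stop() callables are illustrative assumptions.
import time

STEP_SECONDS = 0.4   # roughly "a few inches" of travel per command

def step(direction, drive, stop):
    drive(direction)            # start the motors in the requested direction
    time.sleep(STEP_SECONDS)    # move a discrete amount...
    stop()                      # ...then stop and wait for the next command
```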

When we get a lot of people in, we can switch to a voting system. But the problem with the voting system is that you have to wait and tally. Since my forte is interactive design, one of the things I'm working on is a dynamic voting system that's a little more intuitive. Imagine you're sitting on your couch with your remote, and you can see the view from the robot, wherever it is. You click on the screen where you want it to go, and you can see where everyone else is clicking, too. Then you watch the robot make its decision based on everyone's input.
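
A hedged sketch of that dynamic voting idea: pool every viewer's click for a short window, then steer toward the crowd's average pick. The voting window and the centroid rule are illustrative assumptions, not the system Ogle describes building.

```python
# Sketch: aggregate viewer clicks (points in the robot's camera view) gathered
# during one voting window and steer toward their centroid.
from statistics import mean

def choose_target(clicks):
    """clicks: list of (x, y) screen points collected during one voting window."""
    if not clicks:
        return None                      # nobody voted; stay put
    xs, ys = zip(*clicks)
    return (mean(xs), mean(ys))          # the crowd's average pick

# Example: three viewers click on roughly the same doorway.
print(choose_target([(120, 80), (130, 90), (128, 75)]))
```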

There are definitely several layers of solutions to this problem. But for now, it's more important to me to get the robots working on different platforms, like Twitch or Facebook Live, using the interfaces they have available. In the long term, I think the robots will benefit from having a custom interface designed for thousands of people at once. Instead of telepresence, it's like crowd presence, right?

That's the magic of Twitch. Who would have thought that watching someone play a game, and interacting with them, would be such a big deal? But growing up, that was sort of happening already. When you played video games with your friends, everyone was over your shoulder yelling, 'Go here! Do that!'

Are there some things you can imagine doing but technology just isn't quite there yet?
Using virtual and augmented reality, and being able to stream more than just video. There are a lot of live-streaming platforms out there right now, but they're all offering kind of the same thing. They're all built around watching something. You sit there on your couch and watch TV; it's a one-way experience, and you're isolated. Live-streaming is sort of starting to allow the audience to talk back, just through chat. But I think in the future, things are going to get more interactive. That conversation between the audience and the content is going to become a two-way street. It's not going to take over movies or games; it's going to be something else, a new sort of genre, I feel.
