Emotionally Intelligent Machines Are Closer Than Ever

The line between people and machines could start blurring at warp speed.

Computers are already faster than we are, more efficient, and able to do many of our jobs better. One of the biggest remaining barriers preventing man and machine from melding together into a new race of cyborgs is that humans can understand and express emotion and computers can't. Except that barrier's being broken down too.

Scientists continue to make advances in emotional artificial intelligence, also called affective computing. Most recently, "feeling" computers are being used to improve education. Researchers at North Carolina State University have developed facial recognition software that can tell when a student is feeling frustrated or bored and respond accordingly.

Researchers filmed students during tutoring sessions, and laptops running the software analyzed the facial expressions in the videos, accurately identifying the emotions the students later reported feeling. The computers could tell when students were struggling and when they needed more of a challenge.
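
Strip away the specifics and the underlying idea is simple enough to sketch. Here's a minimal, hypothetical illustration in Python of that kind of approach, not the NC State team's actual software: the feature names, numbers, and labels are all invented, and the off-the-shelf classifier is a stand-in for whatever the real system uses.

```python
# A toy version of the general approach: train a classifier to map
# facial-feature measurements onto the emotions students later reported.
# All features, values, and labels here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-frame features: [brow_furrow, eye_openness, mouth_tension]
frames = np.array([
    [0.9, 0.3, 0.8],  # tense face
    [0.1, 0.2, 0.1],  # slack face
    [0.2, 0.9, 0.3],  # alert face
    [0.8, 0.4, 0.7],
    [0.1, 0.1, 0.2],
    [0.3, 0.8, 0.2],
])

# Labels gathered from the students' own self-reports after the session
labels = ["frustrated", "bored", "engaged",
          "frustrated", "bored", "engaged"]

# Fit the model, then guess the emotion behind a new, unseen expression
model = LogisticRegression(max_iter=1000).fit(frames, labels)
print(model.predict([[0.85, 0.35, 0.75]]))  # -> likely "frustrated"
```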

This level of emotional intelligence in machines could mean wonders for the education system, especially with the exploding popularity of MOOCs—massive open online courses that can enroll thousands of students, often hundreds of miles from a professor who can't pay close attention to the specific needs of each individual.

Imagine what else is possible once you add emotional understanding to the artificial intelligence equation. As computers, gadgets, and robots learn to recognize, even anticipate, how we feel, they could conceivably transform the health care system, open the floodgates in the service industry, or cause any number of awkward situations in the workplace. This is what scientists in the affective computing field want to find out.

Google, a company whose ultimate search is artificial intelligence, is of course on the case. In its quest to perfect semantic search, it's developing computers that can understand context and meaning behind users' queries—essentially fostering a conversation between you and your computer. Google's Futurist-in-Chief Ray Kurzweil, a leading AI scientist, said in a recent interview with Wired that once a machine understands that kind of complex natural language, it becomes, in effect, conscious. He doesn’t think that day is far off.

"I've had a consistent date of 2029 for that vision," Kurzweil said. "And that doesn't just mean logical intelligence. It means emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That's actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029."

Computers of the future will be just as bad at rapping as the rest of us.

Google's not the only one working on closing the gap. There's Affectiva, whose facial recognition software analyzes expressions and physiological responses to detect human feelings, and Beyond Verbal, an Israeli startup that reads emotion in sound by analyzing people's tone of voice. Microsoft is also in the game: the Kinect tracks players' heart rates and physical movements, and the company plans to use that information to gain insight into how people feel while playing games.

And all that's to say nothing of the crop of humanoid robots popping up. The first feeling robot, the Nao, created by European scientists in 2010, not only understands emotional cues but takes them to the next level, learning to imitate them. The Nao is programmed with emotional responses for basic feelings like excitement, fear and sadness, and the more it interacts with humans, the better it gets at expressing those feelings. It can learn to remember faces and behavior and form attachments to certain people—essentially, it develops a personality.

Is this really a good thing? No one’s going to argue against technology that can help educators do a better job, but at what point do emotionally intelligent machines stop being useful tools for humans and start being a menace we can no longer control?

First a computer learns to understand emotions, then to express them—but does it feel them too? Will it then act on them? That doesn't sound like something we want to find out. Star Trek might have been onto something by making Spock emotionless, or at least really, really good at suppressing his emotions.

These debates aren’t the stuff of sci-fi anymore. Emotional artificial intelligence is coming of age alongside wearable tech and the internet of everything. If computers can monitor our every move, every minute, compile that data and then communicate it to other smart devices, the line between people and machines could start blurring at warp speed.

On the other hand, emotions are complicated; we humans barely understand our own. Intelligent machines—so far—can only recognize and remember patterns in data from the physiological symptoms of emotions. A furrowed brow means worried; a high heart rate means excited; a loud voice means angry.
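
To make that concrete, here's a toy sketch of that kind of pattern matching. The signal names and thresholds are invented for illustration; real systems learn these mappings from data rather than hard-coding them, but the ambiguity is the same.

```python
# A crude rule-of-thumb mapping from physiological signals to an emotion
# label. Signals and thresholds are made up for illustration.
def guess_emotion(brow_furrow: float, heart_rate_bpm: int, voice_db: float) -> str:
    if voice_db > 75:          # loud voice
        return "angry"
    if heart_rate_bpm > 110:   # racing heart
        return "excited"       # ...or terrified, or in love: the rules can't tell
    if brow_furrow > 0.7:      # knitted brow
        return "worried"
    return "unknown"           # loneliness, say, leaves no obvious signal here

print(guess_emotion(brow_furrow=0.8, heart_rate_bpm=72, voice_db=55))  # "worried"
```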

That leaves a lot of room for error. Some emotions aren't detectable in that way (could a computer pick up on, say, loneliness?), and others are hard to tell apart (does that rapid heartbeat indicate terror, or love?). We’re still a ways off from computers being able to perceive the full spectrum of emotions, let alone really feel them, no matter how much it seems they do. The reign of humans will continue yet.