
The Most Valiant Attempts to Program Our Five Senses Into Robots

Because the only thing better than stopping to smell the roses is letting a machine do it for you.

Ever since humans first envisioned robots, we've thought about how to make the machines more like us. Robots compete against us on game shows, and rendezvous with us in the bedroom (or at least, make virtual sex feel real). But part of being human is sensing the world around us in a particular way, and doing it all at the same time.

This is much more complicated than it seems, as scientists haven't fully unraveled how we're able to sense what we do; it's both our hardware and software that contain codes that are difficult to crack. Still, scientists power through, discovering how their own senses work while crafting artificial versions of them. Here are some of the most valiant attempts to get robots to taste, smell, touch, hear, and see in the most human way possible.


TASTE

To date, robotic taste tests have primarily been concerned with—wait for it—alcohol. In 2013, a group of Spanish researchers published a beer-sampling study conducted with the "electronic tongue" they created. Each of the 21 sensors was sensitive to a different chemical compound, which enabled the "tongue" to distinguish between types of beer about 82 percent of the time.
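
To get a feel for how an array like that becomes a "taste," here is a minimal sketch of the general idea: treat every sample as a 21-number sensor reading and train an off-the-shelf classifier to tell the styles apart. The beer styles, sensor profiles, and choice of classifier below are made-up stand-ins for illustration, not the Spanish team's actual method.

```python
# Minimal sketch: classifying drinks from a 21-sensor "electronic tongue".
# The sensor responses and beer styles are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_SENSORS = 21
STYLES = ["lager", "stout", "wheat", "ipa"]

# Made-up dataset: each style gets a characteristic 21-sensor profile,
# and individual samples are noisy readings around that profile.
profiles = rng.uniform(0.0, 1.0, size=(len(STYLES), N_SENSORS))
X, y = [], []
for label, profile in enumerate(profiles):
    for _ in range(50):
        X.append(profile + rng.normal(0.0, 0.15, size=N_SENSORS))
        y.append(label)
X, y = np.array(X), np.array(y)

# Hold out some readings to test how often the "tongue" names the right style.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```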

More recently, researchers at Aarhus University in Denmark tracked changes in the most common protein found in saliva to determine how different compounds in wine affected it. This method is more sensitive than others, better able to taste a wine's astringency (its drying, mouth-puckering quality), and may even have applications in detecting and preventing diseases like Alzheimer's, Parkinson's, and Huntington's.

SMELL

Researchers have been trying to get robots to smell since the early 1980s. It's not just because a sense of smell is so automatic for humans; smelling robots could have myriad applications, from sniffing for cancer or bombs to assessing the authenticity of a wine's vintage.


The most recent effort comes from Blanca Lorena Villareal, a Mexican postdoc who created sensors that measure the concentration of particular chemicals in the air over time. Paired with a finely tuned algorithm, these sensors can smell more continuously than those in earlier projects, so the computer can quickly home in on the source of a smell. Villareal has also modified the algorithm to detect various bodily fluids, so her robot could be used to find victims of natural disasters.
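
To make that concrete, here is a toy sketch of the general approach of steering toward stronger concentrations: a simulated robot with two spaced-apart "nostrils" compares readings and turns toward whichever side smells stronger. The plume model, sensor spacing, and update rule are invented for illustration; this is not Villareal's actual algorithm.

```python
# Toy odor-source seeking: smell the gradient, turn toward the stronger reading.
import numpy as np

SOURCE = np.array([8.0, 5.0])  # hypothetical odor source position

def concentration(point):
    """Toy plume: concentration falls off smoothly with distance from the source."""
    return np.exp(-np.linalg.norm(point - SOURCE) ** 2 / 20.0)

pos = np.array([0.0, 0.0])
heading = 0.0  # direction the robot is facing, in radians
for step in range(60):
    # Two "nostrils" offset to the robot's left and right.
    offset = 0.3 * np.array([-np.sin(heading), np.cos(heading)])
    left = concentration(pos + offset)
    right = concentration(pos - offset)
    heading += 0.4 * np.sign(left - right)  # turn toward the stronger smell
    pos = pos + 0.3 * np.array([np.cos(heading), np.sin(heading)])  # step forward
    if np.linalg.norm(pos - SOURCE) < 0.5:
        print(f"reached the source area after {step + 1} steps")
        break
```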


TOUCH

Our skin constantly senses the temperature, humidity, and pressure of everything around us, even when that makes us uncomfortable. Researchers are trying to make so-called "e-skin" that's as good as the real thing, whether to help burn victims or to enhance prosthetics.

From an engineering perspective, this is challenging: the skin would need to run at low voltage (meaning current batteries wouldn't work), be flexible, and be capable of sensing more than one factor at once. But last year, a group of researchers in Israel appeared to achieve just that with a complex of organic molecules, called ligands, that connects and protects the gold nanoparticles serving as sensors. This layer sits on top of a flexible resin that also allows the sensors to interact with one another, detecting the chemicals, temperature, and humidity around them.

HEARING

Humans have been using robots to hear since the invention of the telephone in the late 1800s—but getting robots to really listen has been another matter. The hardware was there by 1876, in the form of a microphone, which was able to convert the sounds of language into an electronic signal. But until recently, the software wasn't sophisticated enough for the computer to understand those sounds.

When a computer gets input from a microphone, the software compares those signals to a giant database of similar signals, called a lexicon. Once it has decoded the input, the computer can respond the way it was programmed to, either by speaking words back or by putting them into written form.
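
As a rough illustration of that compare-against-a-lexicon step, here is a toy sketch in which each known word is stored as a template feature vector and an incoming signal is labeled with whichever template it most resembles. Real recognizers use far richer acoustic and language models; the words, templates, and similarity measure below are invented for the example.

```python
# Toy "lexicon lookup": label an input with the closest stored template.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lexicon: word -> stored "acoustic" template.
lexicon = {
    "yes":   rng.normal(size=32),
    "no":    rng.normal(size=32),
    "hello": rng.normal(size=32),
}

def recognize(signal, lexicon):
    """Return the lexicon entry whose template best matches the input signal."""
    def similarity(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(lexicon, key=lambda word: similarity(signal, lexicon[word]))

# Simulated microphone input: a noisy version of the "hello" template.
incoming = lexicon["hello"] + rng.normal(scale=0.3, size=32)
print(recognize(incoming, lexicon))  # expected to print "hello"
```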


Of course, if you've tried to use early Siri analogs or a voice menu when calling the bank, you know that it can be a bit buggy. But this software is getting better every day and is used in everything from controlling airplanes to transcribing your medical record.

VISION

Much like hearing, human vision is so important to us that we were quick to design machines that could do it. This took the form of early cameras, like the daguerreotypes of the late 1830s. As their speed increased, picture quality improved, and physical size shrank drastically, cameras became a permanent fixture of daily life, used for everything from surveillance to Skype to selfies.

But now we want our robot eyes to interpret that data too, specifically the data of our own faces. The latest iteration is facial recognition software, used by both the FBI and Facebook. It works by picking out facial features, like someone's jaw and eyes, and quantifying the distances between them to build a digital representation of that person's face that the software can recognize.
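
Here is a toy sketch of that landmark-distance idea: take a handful of facial landmarks, compute the distance between every pair, and compare the resulting "signatures" from two photos. The landmark coordinates and match threshold are made up, and production systems like Facebook's rely on learned features rather than hand-measured distances; this just illustrates the principle described above.

```python
# Toy face matching from pairwise landmark distances.
from itertools import combinations
import math

def signature(landmarks):
    """Pairwise distances between landmark points, used as a comparable feature vector."""
    points = list(landmarks.values())
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def difference(sig_a, sig_b):
    """Simple mismatch score: sum of absolute differences between signatures."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

# Hypothetical (x, y) landmark coordinates extracted from two photos.
face_1 = {"left_eye": (30, 40), "right_eye": (70, 41), "nose": (50, 60), "jaw": (50, 95)}
face_2 = {"left_eye": (31, 39), "right_eye": (69, 42), "nose": (51, 61), "jaw": (49, 96)}

score = difference(signature(face_1), signature(face_2))
print("match" if score < 10 else "no match", score)  # prints "match" for these nearly identical faces
```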

Even though Facebook claims its system can recognize people with 97 percent accuracy, it still has trouble with faces in three dimensions, when people aren't looking directly at the camera. For the moment, the software seems to have hit a wall, but given the demand for it, developers may clear the 3D hurdle sooner than we think.

These days, robots are more human than ever, experiencing the world in small flashes of how we, well, sense it. Some predict that robots will soon be sensing everything around them, but if our experience so far is any indication, the machines still have a long way to go.

With additional reporting by Jordan Pearson.