Rewiring the Brain to Create New Senses

How the brain's neuroplasticity lets us substitute one sense for another—and invent new ones.

The brain is often compared to a general purpose computing device that processes our reality and stores our memories. But unlike any machine made of silicon, the three pounds of wetware found inside our heads has the ability to rewire itself. This unique property—known as neuroplasticity—offers a host of opportunities for those willing to test the limits of their perception.

Neuroplasticity occurs in response to changes in environment, behavior, and even emotional state, and its effects are most noticeable in people who train intensively in a particular skill. MRI studies, for example, have shown that the posterior hippocampus, the part of the brain that handles spatial representation of the environment, is larger in London taxi drivers, who depend heavily on their navigation skills. Scientists have also found that many professional musicians show notable differences in the motor, auditory, and visual-spatial regions of their brains.


"Your brain doesn't care where it gets information from, it just figures out what to do with it."

Now neuroscientists such as David Eagleman are exploring how this particular ability of the brain can be leveraged to create new sensory experiences. Together with graduate student Scott Novich and the Baylor College of Medicine's Laboratory for Perception and Action, Eagleman has developed the VEST device (Versatile Extra-Sensory Transducer). This wearable technology has been designed to allow deaf individuals to "feel speech" through an array of vibrating motors arranged on their back. After extensive training, patients have been able to understand words sent to the vest via a microphone and encoded as a sequence of vibrations.
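
A rough sense of how speech might be turned into touch can be sketched in a few lines of Python. The snippet below is a simplified illustration, not the actual VEST encoding: it splits a short audio frame into frequency bands and maps the energy in each band to the drive level of one motor in a hypothetical 32-motor array.

```python
import numpy as np

def audio_frame_to_motors(frame, n_motors=32):
    """Map one short audio frame to intensities for a grid of vibration motors.

    Simplified illustration of speech-to-touch encoding: the frame's spectrum
    is divided into n_motors bands, and each band's energy becomes one motor's
    drive level (0-255). Not the real VEST algorithm.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, n_motors)        # one band per motor
    energy = np.array([band.sum() for band in bands])
    if energy.max() > 0:
        energy = energy / energy.max()                 # normalise to 0..1
    return (energy * 255).astype(np.uint8)             # per-motor drive levels

# Example: 50 ms of a 440 Hz tone (16 kHz sampling) lights up the motors
# assigned to the low-frequency bands.
t = np.linspace(0, 0.05, 800, endpoint=False)
print(audio_frame_to_motors(np.sin(2 * np.pi * 440 * t)))
```

As the article notes, the mapping is only half the story: it is the extensive training that lets the wearer turn these shifting vibration patterns into words.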

The VEST project is an example of sensory substitution, a way to bypass one traditional sensory organ by using another. In the case of VEST, touch is used to replace sound. Eagleman debuted the device at TED 2015, where he explained, "Your brain doesn't care where it gets information from, it just figures out what to do with it."

Sensory substitution reveals just how profound a relationship our brains can form with technology.

The earliest demonstrations of work in this space were pioneered by neuroscientist Paul Bach-y-Rita, whose research showed that vision could be delivered through touch. Known as tactile-vision sensory substitution (TVSS), the approach involved taking images from a camera and converting them into touch sensations. One example is a device developed by his research group that delivered stimuli to the tongue via a flexible electrode held in the mouth. This tongue display unit (TDU) was connected to a head-mounted camera, and the video feed was converted into a pattern of pulses that could be picked up by the tongue. Each pulse corresponded to a pixel in the image, with the user experiencing the final result as a stream of sensations.
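
The pixel-to-pulse mapping described above can be thought of as aggressive downsampling. Below is a minimal sketch; the 20-by-20 electrode grid and the 0-255 intensity scale are invented for illustration rather than taken from Bach-y-Rita's hardware.

```python
import numpy as np

def frame_to_tongue_pulses(gray_frame, grid=(20, 20)):
    """Downsample a grayscale camera frame to a small electrode grid.

    Illustrative only: each grid cell's mean brightness (0-255) stands in
    for the pulse intensity delivered at one electrode on the tongue array.
    """
    h, w = gray_frame.shape
    gh, gw = grid
    pulses = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = gray_frame[i * h // gh:(i + 1) * h // gh,
                              j * w // gw:(j + 1) * w // gw]
            pulses[i, j] = cell.mean()
    return pulses.astype(np.uint8)

# Example: a bright square on a dark background survives the downsampling
# as a cluster of strong pulses in the middle of the grid.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[80:160, 120:200] = 255
print(frame_to_tongue_pulses(frame))
```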


Whilst TVSS requires specially designed equipment, auditory-vision sensory substitution systems can use nothing more than a mobile phone camera to translate a live image into a soundscape. Peter Meijer's "The vOICe" (the "OIC" is emphasized deliberately: "Oh I See") is freely available software that helps blind users recognize basic shapes by converting images to sound. Meijer originally developed the system in the late 1980s and has since translated it into an Android app that has been downloaded over 250,000 times. Whilst Meijer told me that "The vOICe is still largely an uncontrolled experiment; the majority of people who are downloading the app are sighted people who are just playing with it," he shared various practical use cases, such as "a blind lady in Germany who used The vOICe to walk along the beach to avoid destroying sandcastles with her walking cane."
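
The basic mapping is usually summarized as: the image is scanned left to right over roughly a second, vertical position becomes pitch, and brightness becomes loudness. The toy reconstruction below follows that summary; it is not Meijer's implementation, and the frequency range and scan duration are placeholder values.

```python
import numpy as np

def image_to_soundscape(gray, duration=1.0, sample_rate=22050,
                        f_low=500.0, f_high=5000.0):
    """Toy image-to-sound mapping in the spirit of The vOICe.

    Columns are played left to right across `duration` seconds; row index
    sets pitch (top row = highest frequency), and pixel brightness sets
    loudness. Returns a mono waveform in the range -1..1.
    """
    rows, cols = gray.shape
    samples_per_col = int(duration * sample_rate / cols)
    freqs = np.linspace(f_high, f_low, rows)           # top row = highest pitch
    t = np.arange(samples_per_col) / sample_rate
    out = []
    for c in range(cols):
        column = gray[:, c].astype(float) / 255.0      # brightness -> amplitude
        tone = (column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(tone)
    wave = np.concatenate(out)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave

# Example: a bright diagonal line comes out as a pitch sweep across the scan.
img = np.zeros((64, 64), dtype=np.uint8)
np.fill_diagonal(img, 255)
waveform = image_to_soundscape(img)
print(len(waveform), "samples of soundscape")
```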

For some of the long-term users of the software the effect has been profound. One of the first blind individuals to test The vOICe, Pat Fletcher, described using the system as "seeing through her ears." The quasi-visual experience that Fletcher and other users describe is largely due to sound activating areas of the brain normally associated with processing visual information from the eyes.

These first-hand reports of "seeing sound" have been backed up by studies showing that the lateral-occipital tactile-visual area (LOtv, a region of the brain involved in processing visual shape) is activated in blind users by the soundscapes generated by visual-to-auditory sensory substitution. In Pat Fletcher's case this was further supported by a study in which transcranial magnetic stimulation (TMS) was applied to her occipital lobe (visual cortex) to temporarily inhibit its function. The effect was a dramatic, temporary loss of her vOICe-based "sight," providing further evidence that the brain was interpreting the soundscape as visual information.


This video shows a blind person grasping blocks, locating them using only the soundscape that The vOICe generates from its camera feed.

Giles Hamilton-Fletcher, a post-doctoral researcher at the Sussex Synaesthesia Lab, is working with Professor Jamie Ward to take The vOICe to the next level. By using the software in combination with an infrared depth camera (currently an Xbox Kinect), he hopes to "provide users with information that is immediately practical." Their work, which translates depth information into sound, helps blind users recognise and avoid obstacles. They hope the application can be scaled in the future by taking advantage of a new trend in mobile computing. "It seems likely that phones of the future will incorporate infrared cameras that can collect depth information," said Hamilton-Fletcher. "Just look at Intel RealSense, Google Tango, and even Microsoft Hololens."
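
One plausible reading of "immediately practical" depth information is letting nearby surfaces dominate the output. The sketch below is a guess at the general idea rather than the Sussex team's method: for each column of a depth frame, the nearest reading sets a loudness value, so close obstacles sound loud and empty space stays quiet.

```python
import numpy as np

def depth_column_loudness(depth_frame, max_range_mm=4000):
    """Reduce a depth frame to one loudness value per image column.

    Illustrative sketch: the nearest valid depth reading in each column is
    inverted so that close obstacles map to loud (near 1.0) and far or
    missing readings map to quiet (near 0.0).
    """
    depth = depth_frame.astype(float)
    depth[depth <= 0] = max_range_mm                # treat missing data as "far"
    nearest = depth.min(axis=0)                     # closest surface per column
    return np.clip(1.0 - nearest / max_range_mm, 0.0, 1.0)

# Example: an obstacle 800 mm away in the middle columns stands out
# against an otherwise empty 4 m scene.
frame = np.full((48, 64), 4000, dtype=np.uint16)
frame[20:30, 28:36] = 800
print(depth_column_loudness(frame).round(2))
```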

But the future of sensory substitution systems may not only lie in therapeutic or restorative applications. David Eagleman and his research team are already leading the way in exploring the possibility of creating entirely new senses, on top of simply swapping one for another. This process is called sensory addition. In Eagleman's project, it involves sending real-time data from the internet into the brain via the VEST device.

We send data from the web to our brains all the time, but to do so we have relied on our visual sense organs: we read from the glowing rectangular screens of our web-connected devices. Neuroscientist Paul Bach-y-Rita argued that "reading can itself be considered the first sensory substitution system, because it does not occur naturally but rather is an invention that visually presents auditory information (the spoken word)," but what Eagleman is trying to develop are "intuitive" new senses. His team is already training individuals to make informed decisions on the buying and selling of stocks by feeding stock market data into the VEST. The hope is that the pattern-recognition abilities the human brain has evolved over millennia could be leveraged to intuitively recognise patterns of successful trades.
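
Feeding market data into the vest ultimately means choosing a mapping from a numeric stream to motor intensities. A minimal sketch with an invented feature layout (this is not Eagleman's encoding): recent percentage price changes for a basket of symbols are scaled into one vibration level per group of motors.

```python
import numpy as np

def market_snapshot_to_motors(percent_changes, n_motors=32, full_scale=2.0):
    """Map a vector of recent percentage price changes to motor intensities.

    Hypothetical encoding for illustration: each tracked symbol gets a group
    of motors, and the magnitude of its price change sets the vibration
    strength, saturating at +/- full_scale percent.
    """
    changes = np.asarray(percent_changes, dtype=float)
    strength = np.clip(np.abs(changes) / full_scale, 0.0, 1.0)
    groups = np.array_split(np.arange(n_motors), len(changes))
    levels = np.zeros(n_motors)
    for group, s in zip(groups, strength):
        levels[group] = s
    return (levels * 255).astype(np.uint8)

# Example: eight symbols, one of them moving sharply, produces one group
# of motors buzzing much harder than the rest.
print(market_snapshot_to_motors([0.1, -0.3, 1.8, 0.0, 0.2, -0.1, 0.4, 0.05]))
```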


"The plan is to eventually commercialise VEST," Scott Novich told me. "No doubt individuals will want to use it to feel the magnetic fields around them or feel infrared light. But through providing an open API, in combination with machine learning and some sort of feedback we expect to see individuals customise VEST with the sorts of data they want. It is likely to lead to some exciting use cases."

"We could extend our connection with other machines and objects."

Sensory addition of this kind relies on what Eagleman, in his TED Talk, calls the "PH Model of Evolution," where PH stands for "potato head." In other words, he envisions a time when we will be able to create new senses using plug-and-play peripheral devices that expand the limits of the human "umwelt." Umwelt is a German term used in science to refer to an organism's "surrounding" or self-centered world; it captures the fact that our experience of reality is limited by what data our sense organs can pick up from the world around us. Eagleman hopes that devices like VEST will enable humans to interpret signals outside the usual perceptual range, signals that remain imperceptible because we have not evolved the biological receptors or sensory organs to make them available to us.

The examples of sensory addition are creative and varied. "We could extend our connection with other machines and objects," Hamilton-Fletcher speculated. "For example we might want to extend our sense of touch to sensors on the body-work of our car. This would help us to avoid collisions in a more intuitive way." Peter Meijer is a little more reserved with his predictions. "Theoretically you can map any information to soundscape or tactile format, although I am not convinced it will always have benefit for normally-abled people. It will depend on the use case. Sometimes it is still best to use a visual overlay, such as in the case of night vision or thermography [the ability to see heat]."

Whatever the use case, neuroplasticity reveals the brain to be a rich platform for experimentation. As the applied sciences learn more about the variety of perceptual apparatus available to other animals, discover new information about perceptual disorders, and deepen their understanding of how sensory modalities interact, we will inevitably see more devices like Eagleman's VEST and software like Meijer's The vOICe. In short: we might be fast approaching an age of sensory cyborgs.

Jacked In is a series about brains and technology. Follow along here.