Healthy people have five senses: hearing, sight, smell, taste and touch. Everything we experience in life is multi-sensory, and therefore the information transmitted to the brain is received in several different areas. If one sense is not working properly, another sense can be trained to compensate for it.
It is well known, for example, that blind people generally develop an improved sense of touch and hearing to help compensate for their lack of sight. But can people who have difficulty understanding speech and sound make up for their disability through touch?
Psychology researchers from the Ivcher Institute for Brain, Cognition and Technology at Reichman University in Herzliya have developed a special technology that helps people understand speech and sound – and that in the future will allow them to localize sounds as well – by using touch.
The 17-page study was published in the prestigious Nature Portfolio journal Scientific Reports under the title “Effects of training and using an audiotactile sensory substitution device on speech-in-noise understanding.”
One doesn’t have to lack a sense entirely to have difficulty understanding other people. We often find ourselves in situations that make it difficult for us to make out what we are being told. This can be due to the person speaking to us having a soft voice or speech that is difficult to comprehend, a noisy environment, or – what has become common since the COVID-19 pandemic began – people having to communicate from behind a face mask. While these situations are difficult for anyone, they can be insurmountable for the hearing-impaired or deaf. This new technology is designed to help in such cases.
The researchers developed a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. In their experiment, 40 non-native-English-speaking individuals with normal hearing were asked to repeat sentences that had been distorted, simulating hearing via a cochlear implant (a small electronic device that electrically stimulates the cochlear nerve for hearing) in a noisy environment.
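The article does not spell out how the cochlear-implant simulation was implemented. A common way to simulate cochlear-implant hearing in normal-hearing listeners is a noise vocoder: the speech is split into a handful of frequency bands, each band's amplitude envelope is extracted, and the envelopes are used to modulate band-limited noise. The sketch below illustrates that general technique only; the function name, band count, band edges, and smoothing window are all my assumptions, not details from the study.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Illustrative noise vocoder: split `signal` into log-spaced
    frequency bands, extract each band's envelope, and use it to
    modulate noise filtered to the same band."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    win = max(1, int(fs * 0.01))                     # 10 ms smoothing window
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass the speech by zeroing spectrum outside [lo, hi)
        band_spec = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band = np.fft.irfft(band_spec, n=len(signal))
        # Envelope: rectify, then smooth with a moving average
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # Band-limited noise carrier, modulated by the envelope
        noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
        noise_spec = np.where((freqs >= lo) & (freqs < hi), noise_spec, 0)
        out += env * np.fft.irfft(noise_spec, n=len(signal))
    return out

# Demo: vocode one second of a 440 Hz tone
fs = 16000
t = np.arange(fs) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 440 * t), fs)
```

The result preserves the coarse temporal envelope of the speech while discarding fine spectral detail, which is roughly what an implant conveys.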
In some cases, vibrations on the fingertips corresponding to lower speech frequencies were added to the sentences. To simulate these frequencies, the researchers developed a system that converts sound frequencies to vibrations – an audio-tactile SSD. These devices allow the conversion of input from one sense to another, for example, hearing to touch and sight to hearing.
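The paper's exact signal chain is not described in the article. As a minimal sketch of the general idea of an audio-tactile conversion (the function name, 250 Hz cutoff, smoothing window, and actuator rate are my assumptions), the low-frequency content of speech could be mapped to a vibration-amplitude envelope like this:

```python
import numpy as np

def speech_to_vibration(signal, fs, cutoff_hz=250.0, vib_fs=1000.0):
    """Map the low-frequency content of an audio signal to a
    vibration-amplitude envelope (illustrative sketch only)."""
    # Low-pass filter via FFT: keep only components below cutoff_hz
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    low = np.fft.irfft(spectrum, n=len(signal))

    # Envelope: rectify, then smooth with a 10 ms moving average
    win = max(1, int(fs * 0.01))
    envelope = np.convolve(np.abs(low), np.ones(win) / win, mode="same")

    # Resample the envelope to the actuator's drive rate
    t_in = np.arange(len(signal)) / fs
    t_out = np.linspace(0, len(signal) / fs, int(len(signal) / fs * vib_fs))
    return np.interp(t_out, t_in, envelope)

# Demo: a 150 Hz tone (below the cutoff) drives the actuator strongly,
# while a 2 kHz tone (above the cutoff) barely registers.
fs = 16000
t = np.arange(fs) / fs
vib_low = speech_to_vibration(np.sin(2 * np.pi * 150 * t), fs)
vib_high = speech_to_vibration(np.sin(2 * np.pi * 2000 * t), fs)
```

Only the slowly varying energy below the cutoff reaches the fingertip, which matches the article's description of vibrations corresponding to the lower speech frequencies.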
The researchers showed that the subjects’ level of understanding increased over the course of a 45-minute training period accompanied by visual feedback. After the training, the participants were able to understand a new set of sentences in a noisier environment and under conditions that made speech more difficult to understand. Performance improved significantly when the participants received a corresponding vibration in addition to the audio. The research clearly demonstrated that the technology is successful in improving speech comprehension.
Prof. Amir Amedi, director of the Ivcher Institute, who worked with composer and audiovisual programmer Dr. Adi Snir and Dr. Katarzyna Ciesla, explained that “with great respect for the world of neuroscience, we believe that the adult brain can also learn, in a relatively simple way, to use one combination of senses or another to better understand situations. This assumption is consistent with the institute’s previous findings showing that the brain is not divided into separate areas of specialization according to the senses, but rather according to the performance of tasks.”
For example, speech can also be understood through touch, not just through hearing. This system is helpful both for those who are hard of hearing and for people who are trying to understand what is being said on a phone call or who are learning new languages, he said.
The team suggested that potential applications of their training program and the SSD could be auditory rehabilitation in patients with hearing and even sight deficits, as well as for healthy individuals in suboptimal acoustic situations.
Ciesla, a postdoctoral fellow at the institute and co-director of Reichman University’s Rosental Brain Imaging Center, added, “Right now, the next phase of our research is being carried out with people who are hearing-impaired and completely deaf. At this stage, the sensory intervention will be individually tailored to each of the participants, as a combination of sound and vibration, or for the deaf, vibration alone before the implantation of a cochlear implant. This is aimed at establishing their understanding of speech with the help of a changing vibration.”