University of Rochester

Science & Technology

How does the brain cut through noise to understand speech?

SURROUND SOUND: "Your visual cortex is at the back of your brain and your auditory cortex is on the temporal lobes," says University of Rochester researcher Edmund Lalor. "How that information merges together in the brain is not very well understood." (Getty Images photo)

Rochester researchers investigate how visual cues enhance the brain's ability to understand speech in noisy environments.

In a loud, crowded room, how does the human brain use visual speech cues to augment muddled audio and help the listener better understand what a speaker is saying? While most people know intuitively to look at a speaker's lip movements and gestures to help fill in the gaps in speech comprehension, scientists do not yet know how that process works physiologically.

"Your visual cortex is at the back of your brain and your auditory cortex is on the temporal lobes," says Edmund Lalor, an associate professor at the University of Rochester. "How that information merges together in the brain is not very well understood."

Scientists have been chipping away at the problem, using noninvasive electroencephalography (EEG) brainwave measurements to study how people respond to basic sounds such as beeps, clicks, or simple syllables. Lalor and his team of researchers have built on that work by exploring how the specific shape of the articulators, such as the lips and the tongue against the teeth, helps a listener determine whether somebody is saying "F" or "S," or "P" or "D," sounds that can be hard to distinguish in a noisy environment.
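Analyses of this kind often relate a continuous speech feature to the recorded EEG with a lagged linear (encoding) model. Purely as a hypothetical illustration, with assumed names, sampling rate, and lag window rather than details from the study, a ridge regression linking a simulated acoustic envelope to a simulated EEG channel might look like:

```python
import numpy as np

# Hypothetical sketch: recover the brain's lagged response to a speech
# feature with a linear encoding model. All parameters are assumptions
# for illustration, not details of the study described in the article.
rng = np.random.default_rng(0)
fs = 64                       # EEG sample rate in Hz (after downsampling)
n = fs * 60                   # one minute of data
stim = np.abs(rng.standard_normal(n))   # stand-in for a speech envelope
stim -= stim.mean()

# Simulated ground truth: EEG reflects the stimulus at ~100 ms lag, plus noise.
true_lag = int(0.1 * fs)
eeg = np.roll(stim, true_lag) * 0.5 + rng.standard_normal(n)

# Build a lagged design matrix (0-250 ms) and fit ridge regression.
lags = np.arange(0, int(0.25 * fs) + 1)
X = np.column_stack([np.roll(stim, lag) for lag in lags])
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# The largest weight should sit near the true 100 ms lag, and the model's
# prediction should correlate with the noisy EEG.
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"peak lag: {lags[np.argmax(np.abs(w))] / fs * 1000:.0f} ms, r = {r:.2f}")
```

The same framework extends naturally to richer features, such as visual articulator shape, by adding columns to the design matrix.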

Now Lalor wants to take the research a step further and explore the problem in more naturalistic, continuous, multisensory speech. The National Institutes of Health (NIH) is funding that research over the next five years. The project builds on earlier work that was originally started by seed funding from the University.

To study the phenomenon, Lalor's team will monitor the brainwaves of people for whom the auditory system is especially noisy: people who are deaf or hard of hearing and who use cochlear implants. The researchers aim to recruit 250 participants with cochlear implants, who will be asked to watch and listen to multisensory speech while wearing EEG caps that will measure their brain responses.

TAKE IT FROM THE TOP: An electroencephalography (EEG) cap, which measures the electrical activity of the brain through the scalp, collects a mixture of signals coming from many different sources. (University of Rochester photo / J. Adam Fenster)

"The big idea is that if people get cochlear implants implanted at age one, while maybe they've missed out on a year of auditory input, perhaps their audio system will still wire up in a way that is fairly similar to a hearing individual," says Lalor. "However, people who get implanted later, say at age 12, have missed out on critical periods of development for their auditory system. As such, we hypothesize that they may use visual information they get from a speaker's face differently or more, in some sense, because they need to rely on it more heavily to fill in information."

Lalor is partnering on the study with co-principal investigator Professor Matthew Dye, who directs Rochester Institute of Technology's doctoral program in cognitive science and the National Technical Institute for the Deaf Sensory, Perceptual, and Cognitive Ecology Center, and who also serves as an adjunct faculty member at the University.

Lalor says one of the biggest challenges is that the EEG cap, which measures the electrical activity of the brain through the scalp, collects a mixture of signals coming from many different sources. Measuring EEG in individuals with cochlear implants complicates the process further, because the implants themselves generate electrical activity that obscures the readings.

"It will require some heavy lifting on the engineering side, but we have great students here at Rochester who can help us use signal processing, engineering analysis, and computational modeling to examine these data in a different way that makes it feasible for us to use," says Lalor.
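One simple signal-processing idea in this spirit, offered purely as a hypothetical sketch rather than the team's actual pipeline, is template regression: if part of the implant artifact has a known waveform, its contribution can be estimated by least squares and subtracted from each EEG channel. The signals and scale factors below are simulated assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: suppress a stereotyped implant artifact by regressing
# a known artifact template out of a contaminated EEG channel. Real cochlear
# implant artifact removal is far more involved (filtering, ICA, modeling).
rng = np.random.default_rng(1)
fs, n = 500, 2000
t = np.arange(n)
brain = np.sin(2 * np.pi * 10 * t / fs)                  # 10 Hz "neural" signal
artifact = np.sign(np.sin(2 * np.pi * 60 * t / fs))      # pulse-like artifact
eeg = brain + 3.0 * artifact + 0.1 * rng.standard_normal(n)

# Least-squares estimate of the artifact's gain, then subtract it.
g = (artifact @ eeg) / (artifact @ artifact)
cleaned = eeg - g * artifact

err_before = np.mean((eeg - brain) ** 2)
err_after = np.mean((cleaned - brain) ** 2)
print(f"MSE vs. true signal: {err_before:.3f} -> {err_after:.3f}")
```

Because the simulated artifact is much larger than the neural signal, the mean squared error against the true signal drops sharply after the regression step, which is the basic reason template-based cleaning can make EEG from implant users usable.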

Ultimately, the team hopes that a better understanding of how the brain processes audiovisual information will lead to better technology for people who are deaf or hard of hearing.