May 19, 2023 | Anna Flanagan
While some might assume that hearing in noise is a problem of aging, it turns out that children can also have difficulty understanding speech in noisy environments.

A common complaint that audiologists hear from clients coming in for hearing assessments is difficulty hearing in noisy backgrounds. It’s a problem that affects millions of adults and can become more pronounced with age, but it affects children and adolescents as well.
While the problem might be common, adequate ways of addressing it are not. Effective solutions require a deep understanding of why the problem occurs. Three faculty members in the Department of Speech and Hearing Science—Assistant Professor Mary Flaherty, Associate Professor Dan Fogerty and Assistant Professor Ian Mertes—focus their research on this area with the goal of gaining that understanding and finding solutions that improve the quality of life of those who struggle to understand speech in noise.
“If people are unable to hear clearly in noisy environments such as restaurants, it can negatively impact their ability to socialize and communicate in those settings and, ultimately, to enjoy those settings,” Mertes said.
Mechanics Are There; Understanding Is Not
While some might assume that hearing in noise is a problem of aging, it turns out that children can also have difficulty understanding speech in noisy environments. It’s known that children with normal hearing have fully developed auditory systems by their first birthday, but that their brains take longer—into their teenage years—to develop the ability to process speech in noise effectively. What isn’t known is why this is. That’s what Mary Flaherty wants to find out.
“We know it has something to do with attention and sound-source segregation, separating different sounds in the environment,” she said. “We also know children just need more information than adults. They aren’t as good as adults at putting puzzles together when they are missing pieces. But we don’t really understand what it is that children need to help them.”
Flaherty’s concern is that children who struggle with understanding speech in complex acoustic environments may fall behind in school. Moreover, the true problem may go undiagnosed and the child labeled negatively by teachers and classmates. And if this is true of children with normal hearing, imagine the extra burden faced by children with hearing loss who experience greater difficulty understanding speech in noise.
Adults use cues such as voice pitch to focus on one speaker in noise and ignore everyone else. Children cannot do that. So what cues can help children? Flaherty is currently investigating talker familiarity. She worked with a graduate student in audiology to develop a game that familiarizes children with a voice while they play. In a pilot study in which children played the game 10 minutes a day for five days, their speech-in-noise perception for that particular voice improved. Flaherty plans to pursue research that tests this phenomenon in the classroom.
This summer, she will collaborate with researchers at Lurie Children’s Hospital of Chicago to investigate hearing-in-noise difficulties faced by children who use hearing aids. Among the issues she will investigate is whether talker familiarity also can help children with hearing loss, which has never before been studied. As she continues her research efforts, Flaherty hopes to identify primary factors that account for the long trajectory of children’s development of speech-in-noise perception, and to use the knowledge to improve hearing in noise, especially for clinical populations. She also collaborates with SHS colleague Pasquale Bottalico on classroom studies that they hope will lead to a method of predicting which children may have difficulty understanding speech in noise, identifying characteristics that they have in common, and recommending effective interventions.
More Cues, but More Potential Deficits with Age
Dan Fogerty focuses on older adults in his studies of how noise interferes with speech processing, how it impairs understanding of a message, and how it requires listeners to recruit other cognitive and sensory processes to make sense of what they hear.
A predominant perspective on how noise makes speech understanding difficult is that it exerts two primary effects: energetic masking and informational masking.
“In energetic masking, the noise covers up the speech energy in time and frequency,” Fogerty said. “Informational masking refers to all of the other things that might make it difficult, such as the message or familiarity of a competing talker that can draw your attention.”
Sometimes the noise dominates the signal received by the brain, depriving the listener of information. Speech dominates the signal at other times, and from these glimpses of information, listeners can piece together an interpretation of what is being said. Fogerty’s research uses glimpsing theory to examine what cues are available to the listener at any given time, but also extends the theory to how speech information changes over time.
“Amplitude modulation, the temporal rhythm of speech, is critical for understanding speech,” he said. “We’re finding that if the competing sounds vary similarly to the rhythmic aspects of speech, it can make speech understanding difficult. If we separate out these properties so that noise is varying at a faster or slower rate, then people are better able to glimpse or extract information.”
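To make that idea concrete, here is a minimal numerical sketch in Python. The parameters (sample rate, modulation rates, dip threshold) are illustrative choices, not values from Fogerty’s experiments; the point is simply that slower modulation of a masker produces fewer but longer dips, while faster modulation produces more but shorter ones, even when the total dip time is the same.

```python
import numpy as np

fs = 16_000                       # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)     # one second of time samples

rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

# Amplitude-modulate the noise at two rates: ~4 Hz is near the
# syllabic rhythm of speech; ~32 Hz is much faster than that rhythm.
slow_env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
fast_env = 0.5 * (1 + np.sin(2 * np.pi * 32 * t))
slow_masker = slow_env * noise
fast_masker = fast_env * noise

def count_dips(envelope, threshold=0.2):
    """Count contiguous stretches where the masker envelope dips below
    a threshold -- the moments a listener could 'glimpse' the speech."""
    below = envelope < threshold
    rising_edges = np.sum(np.diff(below.astype(int)) == 1)
    return int(rising_edges + below[0])

# Over one second, the slow masker offers 4 long dips and the fast
# masker 32 short ones: same total dip time, very different glimpses.
print(count_dips(slow_env), count_dips(fast_env))   # prints "4 32"
```

Whether the many short glimpses or the few long ones are more useful depends on the listener, which is exactly the kind of question glimpsing-based research probes.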
Fogerty’s primary research populations are individuals who have mild or moderate hearing loss, as well as individuals who are aging with typical sensory and cognitive changes but without dementia or significant cognitive decline. He also tests college-age individuals so that effects related to aging or hearing loss stand out more clearly. One point he considers important to remember: being older doesn’t always mean performing more poorly on speech understanding tasks.
“We have a lot of older adults who do just as well or better than college students on some tasks,” he said. “That’s important for us because we want to know what is preserving their ability to understand speech in noise. What strategies are they using that are particularly helpful?”
His research goals are to contribute to the design of better hearing devices and to address issues that might not have a technology solution.
“That’s why we’re so interested in finding out what the abilities are that people bring to the task of listening in noise, and whether certain skills can be sharpened through training,” he said.
The Physiology Behind It All
From animal and human studies, we know that when sound enters the ear, the brain can fine-tune it by controlling how the middle and inner ear respond. Animal studies have shown that these responses can help encode sounds in background noise.
Ian Mertes is studying these top-down mechanisms in young adults with normal hearing to determine if they also help humans understand speech in noise. Both mechanisms rely on the brain stem. One mechanism contracts a muscle, which pulls on a bone of the middle ear, affecting how noise is transmitted through the auditory system. It can reduce the noise. The brain stem also can change how the inner ear amplifies sound, which also can turn down noise.
“I’m looking at how these two mechanisms, which are reflexes, work together,” Mertes said. “They may work at different frequency regions, the lower frequencies or pitches and the middle frequencies or pitches. Working together, they may help people hear in background noise.”
Using otoacoustic emissions, a clinical audiology test of inner ear function, his studies have shown that these physiological mechanisms are correlated with the ability to understand speech in noise. But, he said, it’s complicated.
“It can depend on how we do the physiological measurement, the types of sounds we present to the ears, and the speech perception task,” he said. His current focus on individuals without hearing problems gives him the “best look” at normally functioning auditory systems. “They have the most robust physiological responses and are able to participate in the perceptual tasks, and that can help me create a good template for adapting those measurements when I extend my work to clinical populations.”
Working with Vanderbilt University colleague Ben Hornsby, an associate professor of hearing and speech sciences, Mertes also plans to add another auditory concept called listening effort to the physiological picture of understanding speech in noise. Do individuals with weak top-down reflexes have to put more effort into completing speech perception tasks? What are the consequences of this additional effort?
The in-depth knowledge Mertes is gaining through his research may help explain why some young adults with clinically normal hearing report having difficulty hearing in background noise, another area of interest to him.
Summing up what he hopes will be the outcome of his research program, he said, “I’d ultimately like to make a significant contribution to treatment—strengthening auditory reflexes or simulating them in devices, increasing understanding of messages while reducing the effort it takes to reach that understanding.”