
Spotlight on Kamal Sen, PhD
Associate Professor and Director, Natural Sounds and Neural Coding Laboratory, Boston University
and 2023 AHRF Grant Recipient
“Most of what we know about the human brain comes from technologies like fMRI, where people are lying in this machine and their brains are being imaged. We don’t know very much at all about how the brain is activated in natural settings – when people are walking around, looking at a scene, moving their heads to orient.”
Imagine a crowded party. Over the music, you hear people talking, laughter, glasses and plates clinking… maybe a dog barks as the doorbell rings. Kamal Sen, PhD, explains that in this setting our brains can usually “focus on a single conversation despite numerous distractions.” He is using a 2023 AHRF Discovery Grant to investigate auditory scene analysis in humans (also called “the cocktail party problem”) – the brain’s ability to selectively attend to specific objects in a complex scene involving multiple sounds. He wants to explore why those with hearing impairment “find such complex scenes confusing, overwhelming, and debilitating even when wearing hearing assistive devices.”
Auditory Scene Analysis
From a background in physics, Sen followed mentors into neuroscience and then into auditory neuroscience. He notes, “When I started in the field, the auditory system had been studied with sounds that are quite artificial – like white noise and pure tones.” Sen’s interest in auditory scene analysis grew out of studying how songbirds process sounds in their natural environment. He adds, “We know there is a lot of richness to sounds like animal vocalizations and [human] speech. There’s a difference in the physics of the sounds themselves. And they are behaviorally more relevant; this means we are more engaged when those sounds are presented. Both of these aspects really shape the brain response. Equally surprising was that the complexity of sounds themselves shapes the filtering in the brain.”
Sen uses computational modeling to understand how the auditory cortex contributes to the recognition of complex sounds. His work “morphed” into the cocktail party problem because “we’re always communicating in the presence of background sounds – whether it’s at a café or market or a train station. The question is how that impacts communication and representation.”
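To give a flavor of this kind of computational modeling – one standard textbook approach, not necessarily the specific models Sen’s lab uses – the sketch below predicts a neuron’s response to a complex sound by filtering the sound’s spectrogram with a spectrotemporal receptive field (STRF), a widely used description of auditory cortical tuning. All shapes and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrogram": 32 frequency channels x 500 time bins (illustrative).
n_freq, n_time, n_lag = 32, 500, 20
spectrogram = rng.random((n_freq, n_time))

# Hypothetical STRF: tuned to mid frequencies, excited at short lags and
# suppressed at longer lags (a common center-surround temporal shape).
lags = np.arange(n_lag)
freqs = np.arange(n_freq)
strf = (np.exp(-((freqs[:, None] - 16) / 4) ** 2)          # frequency tuning
        * (np.exp(-lags / 3) - 0.5 * np.exp(-lags / 8)))   # temporal kernel

# Predicted response: at each time t, correlate the STRF with the recent
# spectrogram history (column 0 of the reversed window = lag 0).
response = np.zeros(n_time)
for t in range(n_lag, n_time):
    window = spectrogram[:, t - n_lag + 1:t + 1][:, ::-1]
    response[t] = np.sum(strf * window)

print(response[n_lag:n_lag + 5])
```

In practice the STRF is fit to recorded neural data rather than hand-built, and richer nonlinear models exist; the linear filter above is simply the most common starting point.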
Novel Device and Collaborators

Sen’s AHRF-funded project uses innovative wearable technology for measuring brain signals, developed in the laboratory of David Boas, PhD, at Boston University. These wearable caps measure both EEG (electroencephalography) signals and fNIRS (functional near-infrared spectroscopy) – a measure of “blood flow and blood signals” – in the brain. Sen describes the complementary strengths of these measures: “The EEG provides very precise temporal signals [electrical activity over time]. The fNIRS measure has good spatial resolution. So you can really tell the activity is coming from a certain part of the brain.”
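To make that temporal-versus-spatial tradeoff concrete, here is a toy simulation – purely illustrative, not a model of the Boas lab’s caps – in which an “EEG” trace follows simulated neural bursts within milliseconds, while an “fNIRS” trace is the same activity blurred through a slow hemodynamic response that peaks seconds later. The gamma-shaped hemodynamic kernel is a common simplifying assumption.

```python
import numpy as np

fs = 100.0                       # sample rate in Hz (illustrative)
t = np.arange(0, 30, 1 / fs)     # 30 seconds of "recording"

# Toy "neural activity": brief bursts, e.g. when a listener attends a talker.
neural = np.zeros_like(t)
neural[(t % 10 > 2) & (t % 10 < 4)] = 1.0

# EEG-like signal: tracks neural activity almost instantly (ms-scale),
# though in reality it is spatially blurred across the scalp.
eeg = neural + 0.2 * np.random.default_rng(0).normal(size=t.size)

# fNIRS-like signal: neural activity convolved with a slow gamma-shaped
# hemodynamic response, so it lags by seconds but localizes well.
hrf_t = np.arange(0, 20, 1 / fs)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)
hrf /= hrf.sum()
fnirs = np.convolve(neural, hrf)[:t.size]

print(f"EEG responds within ~{1000 / fs:.0f} ms of each burst;")
print(f"the fNIRS kernel peaks ~{hrf_t[np.argmax(hrf)]:.1f} s later.")
```

The point of combining the two modalities, as Sen describes, is that each one supplies what the other lacks: millisecond timing from EEG, spatial localization from fNIRS.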
Sen adds, “When I met David and talked to him about the cocktail party problem, we knew we had found a good match between technology and a problem where you’re naturally engaging with people as you’re conversing, moving your head, looking at a conversation partner.” He continues, “When we start looking at these signals for the first time in humans, we should be able to see specific types of signatures, like which areas [of the brain] are activated, how they follow a target sound of interest, and how other sounds are suppressed. I think it will remarkably advance our understanding of how the brain is activated in these situations.”
In addition to David Boas, Sen’s co-investigators are Virginia Best, PhD, who specializes in psychophysics, and Laura Lewis, PhD, who is providing specialized analysis of the EEG data.
Sen sees many ways this work can be expanded. For instance, it might lead to improvements in the way hearing aids segregate sound, enabling attention-steered hearing aids (see the sketch below). It also could have broader implications for conditions such as autism or attention deficit disorders, where individuals have trouble suppressing unwanted sound stimuli.
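As one concrete illustration of how attention steering could work – a minimal sketch of a published research idea (envelope-based auditory attention decoding), not the specific design of Sen’s project – the snippet below compares a listener’s brain signal against the envelopes of two already-separated talkers and boosts whichever talker the brain appears to be tracking. The separated envelopes, sampling rate, and gain values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the hearing aid's front end has already separated
# two talkers; we use their amplitude envelopes at 50 Hz over 10 seconds.
fs, dur = 50, 10
n = fs * dur
smooth = np.ones(25) / 25
env_a = np.convolve(rng.random(n), smooth, mode="same")
env_b = np.convolve(rng.random(n), smooth, mode="same")

# Toy "brain signal" that tracks talker A (the attended talker) plus noise.
brain = env_a + 0.5 * rng.normal(size=n)

def corr(x, y):
    """Pearson correlation between two signals."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

# Decode attention: the talker whose envelope correlates better with the
# brain signal is taken to be attended, and the gains are steered to it.
attended_a = corr(brain, env_a) > corr(brain, env_b)
gain_a, gain_b = (1.0, 0.25) if attended_a else (0.25, 1.0)
print("Boosting talker", "A" if attended_a else "B",
      "with gains", (gain_a, gain_b))
```

A real device would need robust real-time source separation and a trained neural decoder, but the loop above captures the core idea: let the listener’s own brain signals tell the hearing aid which sound to amplify.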