The physics of hearing
University of California - Santa Barbara Science News, Aug 01, 2017
Humans have an uncommon aural ability: In a room full of people all engaged in separate conversations, we can push aside the extraneous voices and background noise to hear one particular speaker. Similarly, in a music venue, we can enjoy the performance of a soloist as comfortably as we can a full orchestra more than 100 decibels louder.
Exactly how this happens – how we make sense of sound – is not fully understood. Scientists who study the biophysics and neurobiology of hearing and the information theory of complex auditory signals are among the group now investigating those underlying mechanics at UC Santa Barbara's Kavli Institute for Theoretical Physics (KITP).
Funded by the National Institutes of Health, "Physics of Hearing: From Neurobiology to Information Theory and Back" is a synergistic research program at KITP that brings together scientists from different fields to study a common topic.
"We have a wide array of scientists, including statistical physicists, neurobiologists, physiologists, computer scientists and mathematicians," said program coordinator Tobias Reichenbach, a senior lecturer in the Department of Bioengineering at Imperial College London. "We expect that these different perspectives will yield significant progress in understanding the neurobiology of hearing and oral communication as well as speech-recognition technology."
"We lack an understanding of how a complex auditory scene is decomposed into its individual signals such as speech," said program coordinator Maria Geffen, an assistant professor at the University of Pennsylvania, whose Laboratory of Auditory Coding combines computational and biological approaches to study the neuronal mechanisms for auditory perception and learning.
During the eight-week program, scientists have been examining from a variety of perspectives how neural networks process sound. A multidisciplinary approach is necessary, they say, because complex natural sounds, which are redundant and contain a variety of frequencies, are difficult to unravel.
"This program looks at both biology and technology and explores how each informs the other," Reichenbach said. "Building new collaborations among people from different disciplines is essential for addressing such complexity, for discovering how the brain actually processes these complex sounds and for replicating this intricate process in hearing aids or in speech-recognition algorithms."
"Understanding how to leverage what we learn about different parts of the brain and different approaches should lead to improvements in the design of hearing aids, cochlear implants and different kinds of prostheses," Geffen said. "Over the past 10 years, we have made tremendous progress in understanding the complexity of the brain structures that support hearing and speech perception at the true systems level by integrating information between the central areas in the brain and the ear."