{"title":"Speech enhancement for hearing instruments: Enabling communication in adverse conditions","authors":"Rainer Martin","doi":"10.1109/WASPAA.2013.6701897","DOIUrl":null,"url":null,"abstract":"Hearing instruments are frequently used in notoriously difficult acoustic scenarios. Even for normal-hearing people ambient noise, reverberation and echoes often contribute to a degraded communication experience. The impact of these factors becomes significantly more prominent when participants suffer from a hearing loss. Nevertheless, hearing instruments are frequently used in these adverse conditions and must enable effortless communication. In this talk I will discuss challenges that are encountered in acoustic signal processing for hearing instruments. While many algorithms are motivated by the quest for a cocktail party processor and by the high-level paradigms of auditory scene analysis a careful design of statistical models and processing schemes is necessary to achieve the required performance in real world applications. Rather strict requirements result from the size of the device, the power budget, and the admissable processing latency. Starting with low-latency spectral analysis and synthesis systems for speech and music signals I will continue highlighting statistical estimation and smoothing techniques for the enhancement of noisy speech. The talk emphasizes the necessity to find a good balance between temporal and spectral resolution, processing latency, and statistical estimation errors. It concludes with single and multi-channel speech enhancement examples and an outlook towards opportunities which reside in the use of comprehensive speech processing models and distributed resources.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WASPAA.2013.6701897","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Hearing instruments are frequently used in notoriously difficult acoustic scenarios. Even for normal-hearing people, ambient noise, reverberation, and echoes often degrade the communication experience. The impact of these factors becomes significantly more prominent when participants suffer from a hearing loss. Nevertheless, hearing instruments are frequently used in these adverse conditions and must enable effortless communication. In this talk I will discuss the challenges encountered in acoustic signal processing for hearing instruments. While many algorithms are motivated by the quest for a cocktail-party processor and by the high-level paradigms of auditory scene analysis, a careful design of statistical models and processing schemes is necessary to achieve the required performance in real-world applications. Rather strict requirements result from the size of the device, the power budget, and the admissible processing latency. Starting with low-latency spectral analysis and synthesis systems for speech and music signals, I will then highlight statistical estimation and smoothing techniques for the enhancement of noisy speech. The talk emphasizes the necessity of finding a good balance between temporal and spectral resolution, processing latency, and statistical estimation errors. It concludes with single- and multi-channel speech enhancement examples and an outlook on opportunities that reside in the use of comprehensive speech processing models and distributed resources.
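
To make the trade-offs named above concrete, the following is a minimal sketch, in Python/NumPy, of the kind of STFT-domain single-channel processing the abstract alludes to. It is not the speaker's algorithm: the function name, the frame length, the smoothing constants, and the assumption that the first few frames are noise-only are all illustrative choices. The decision-directed a priori SNR estimator and the Wiener gain are standard textbook components; a deployed system would replace the naive noise estimate with a continuous tracker such as minimum statistics.

```python
# Illustrative sketch only: STFT-domain noise reduction with a
# decision-directed Wiener gain. All parameter values are hypothetical.
import numpy as np

def enhance(noisy, frame_len=128, noise_frames=10,
            alpha_dd=0.98, gain_floor=0.1):
    """Enhance a 1-D noisy signal. frame_len=128 at 16 kHz gives 8 ms
    frames; with 50% overlap the algorithmic latency is one frame,
    which matters for hearing devices."""
    hop = frame_len // 2
    # sqrt-Hann window: analysis * synthesis windows sum to 1 at 50% overlap
    win = np.sqrt(np.hanning(frame_len + 1)[:-1])
    n_frames = (len(noisy) - frame_len) // hop + 1
    n_bins = frame_len // 2 + 1
    out = np.zeros(len(noisy))
    noise_psd = np.zeros(n_bins)
    gain_prev = np.ones(n_bins)
    gamma_prev = np.ones(n_bins)

    for i in range(n_frames):
        frame = noisy[i * hop:i * hop + frame_len] * win
        spec = np.fft.rfft(frame)
        psd = np.abs(spec) ** 2

        if i < noise_frames:
            # Crude noise-PSD estimate: average the first frames, which
            # are ASSUMED noise-only. A real system would track the noise
            # continuously instead (e.g. with minimum statistics).
            noise_psd += psd / noise_frames

        gamma = psd / np.maximum(noise_psd, 1e-12)      # a posteriori SNR
        # Decision-directed a priori SNR: recursive smoothing that trades
        # musical noise against tracking speed.
        xi = alpha_dd * gain_prev**2 * gamma_prev \
             + (1 - alpha_dd) * np.maximum(gamma - 1.0, 0.0)
        gain = np.maximum(xi / (1.0 + xi), gain_floor)  # floored Wiener gain

        # Weighted overlap-add synthesis
        out[i * hop:i * hop + frame_len] += np.fft.irfft(gain * spec) * win
        gain_prev, gamma_prev = gain, gamma
    return out
```

Each constant in this sketch embodies one of the balances the abstract names: frame_len trades spectral resolution against processing latency, alpha_dd trades estimation variance (and audible musical noise) against tracking speed, and gain_floor limits speech distortion at the cost of residual noise.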