{"title":"基于空间相干的小型传声器阵列声场分类","authors":"R. Scharrer, M. Vorländer","doi":"10.1109/TASL.2013.2261813","DOIUrl":null,"url":null,"abstract":"The quality and performance of many multi-channel signal processing strategies in microphone arrays as well as mobile devices for the enhancement of speech intelligibility and audio quality depends to a large extent on the acoustic sound field that they are exposed to. As long as the assumption on the sound field is not met, the performance decreases significantly and may even yield worse results for the user than an unprocessed signal. Current hearing aids provide the user for instance with different programs to adapt the signal processing to the acoustic situation. Signal classification describes the signal content and not the type of sound field. Therefore, a further classification of the sound field, in addition to the signal classification, would increase the possibilities for an optimal adaption of the automatic program selection and the signal processing methods in mobile devices. To this end a sound field classification method is proposed that is based on the complex coherences between the input signals of distributed acoustic sensors. In addition to the general approach an exemplary setup of a hearing aid equipped with two microphone sensors is discussed. As only coherences are used, the method classifies the sound field regardless of the signal carried by it. This approach complements and extends the current signal classification approach used in common mobile devices. 
The method was successfully verified with simulated audio input signals and with real life examples.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2261813","citationCount":"10","resultStr":"{\"title\":\"Sound Field Classification in Small Microphone Arrays Using Spatial Coherences\",\"authors\":\"R. Scharrer, M. Vorländer\",\"doi\":\"10.1109/TASL.2013.2261813\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The quality and performance of many multi-channel signal processing strategies in microphone arrays as well as mobile devices for the enhancement of speech intelligibility and audio quality depends to a large extent on the acoustic sound field that they are exposed to. As long as the assumption on the sound field is not met, the performance decreases significantly and may even yield worse results for the user than an unprocessed signal. Current hearing aids provide the user for instance with different programs to adapt the signal processing to the acoustic situation. Signal classification describes the signal content and not the type of sound field. Therefore, a further classification of the sound field, in addition to the signal classification, would increase the possibilities for an optimal adaption of the automatic program selection and the signal processing methods in mobile devices. To this end a sound field classification method is proposed that is based on the complex coherences between the input signals of distributed acoustic sensors. In addition to the general approach an exemplary setup of a hearing aid equipped with two microphone sensors is discussed. As only coherences are used, the method classifies the sound field regardless of the signal carried by it. 
This approach complements and extends the current signal classification approach used in common mobile devices. The method was successfully verified with simulated audio input signals and with real life examples.\",\"PeriodicalId\":55014,\"journal\":{\"name\":\"IEEE Transactions on Audio Speech and Language Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/TASL.2013.2261813\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Audio Speech and Language Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TASL.2013.2261813\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Audio Speech and Language Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TASL.2013.2261813","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Sound Field Classification in Small Microphone Arrays Using Spatial Coherences
The quality and performance of many multi-channel signal processing strategies in microphone arrays and mobile devices for the enhancement of speech intelligibility and audio quality depend to a large extent on the acoustic sound field to which they are exposed. If the assumptions about the sound field are not met, performance degrades significantly and may even yield worse results for the user than the unprocessed signal. Current hearing aids, for instance, provide the user with different programs to adapt the signal processing to the acoustic situation. Signal classification, however, describes the signal content, not the type of sound field. Therefore, classifying the sound field in addition to the signal would improve the automatic program selection and the adaptation of signal processing methods in mobile devices. To this end, a sound field classification method is proposed that is based on the complex coherences between the input signals of distributed acoustic sensors. In addition to the general approach, an exemplary setup of a hearing aid equipped with two microphone sensors is discussed. As only coherences are used, the method classifies the sound field regardless of the signal carried by it. This approach complements and extends the signal classification currently used in common mobile devices. The method was successfully verified with simulated audio input signals and with real-life examples.
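The core quantity the abstract describes is the complex coherence between two microphone signals, γ(f) = S_xy(f) / sqrt(S_xx(f) · S_yy(f)). As a rough intuition: for a single direct source the coherence magnitude stays near 1 across frequency, whereas for independent (diffuse-like) noise at the two sensors it drops toward 0. The sketch below illustrates only this basic idea with Welch spectral estimates; it is not the authors' classifier, and the signals, segment length, and any threshold on mean |γ| are illustrative assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def complex_coherence(x, y, fs, nperseg=1024):
    """Complex coherence gamma(f) = S_xy / sqrt(S_xx * S_yy), Welch estimates."""
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)     # cross-spectral density
    _, Sxx = welch(x, fs=fs, nperseg=nperseg)      # auto-spectral densities
    _, Syy = welch(y, fs=fs, nperseg=nperseg)
    return f, Sxy / np.sqrt(Sxx * Syy)

rng = np.random.default_rng(0)
fs, n = 16000, 160000

# Direct-field-like case: one source dominates both channels (small sensor
# noise added), so |gamma| is close to 1 across frequency.
s = rng.standard_normal(n)
_, g_direct = complex_coherence(s, s + 0.01 * rng.standard_normal(n), fs)

# Diffuse-field-like case: independent noise at the two microphones,
# so the estimated |gamma| is small (limited only by estimator bias).
_, g_diffuse = complex_coherence(rng.standard_normal(n),
                                 rng.standard_normal(n), fs)

print(np.mean(np.abs(g_direct)))   # close to 1
print(np.mean(np.abs(g_diffuse)))  # close to 0
```

Because only coherence is used, the same separation appears regardless of whether the carried signal is speech, music, or noise, which is what makes the approach complementary to content-based signal classification.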
Journal Introduction:
The IEEE Transactions on Audio, Speech, and Language Processing covers the sciences, technologies, and applications relating to the analysis, coding, enhancement, recognition, and synthesis of audio, music, speech, and language. Audio processing also covers auditory modeling, acoustic modeling, and source separation. Speech processing also covers speech production and perception, adaptation, lexical modeling, and speaker recognition. Language processing also covers spoken language understanding, translation, summarization, mining, general language modeling, and spoken dialog systems.