Wolfgang Mack, Emanuël Habets
2022 IEEE Spoken Language Technology Workshop (SLT), published 9 January 2023
DOI: 10.1109/SLT54892.2023.10023206
A Hybrid Acoustic Echo Reduction Approach Using Kalman Filtering and Informed Source Extraction with Improved Training
State-of-the-art acoustic echo and noise reduction combines adaptive filters with a deep neural network-based postfilter. While the signal-to-distortion ratio is often used for training, it is not well-defined for all echo-reduction scenarios. We propose well-defined loss functions for training, along with modifications of a recently proposed echo reduction system based on informed source extraction. The modifications include using a Kalman filter as a prefilter and a cyclical learning-rate scheduler, and they improve performance on the blind test set of the Interspeech 2021 AEC Challenge. Compared with the challenge winner, the proposed system scores 0.1 mean opinion score (MOS) points lower in double-talk echo reduction but 0.3 MOS points higher in echo-only echo reduction; in all other scenarios, the two systems perform comparably.
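The abstract mentions a cyclical learning-rate scheduler as one of the training modifications but does not specify its shape or parameters. As an illustration only, a minimal sketch of one common variant, the triangular cyclical schedule, might look as follows; the function name, bounds, and cycle length are assumptions, not details from the paper.

```python
def cyclical_lr(step, base_lr=1e-4, max_lr=1e-3, cycle_steps=2000):
    """Triangular cyclical learning rate (illustrative sketch, not the
    paper's exact scheduler): the rate rises linearly from base_lr to
    max_lr over the first half of a cycle, then falls back to base_lr
    over the second half, and the cycle repeats."""
    half = cycle_steps / 2
    pos = step % cycle_steps  # position within the current cycle
    # fraction of the way toward max_lr (1.0 at mid-cycle, 0.0 at the ends)
    frac = pos / half if pos <= half else (cycle_steps - pos) / half
    return base_lr + (max_lr - base_lr) * frac
```

In a training loop, the optimizer's learning rate would be updated with `cyclical_lr(step)` at every step; cycling the rate is often used to help the optimizer escape poor local minima.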