Redundant Convolutional Network With Attention Mechanism For Monaural Speech Enhancement

Tian Lan, Yilan Lyu, Guoqiang Hui, Refuoe Mokhosi, Sen Li, Qiao Liu

ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020, pp. 6654-6658.
DOI: 10.1109/ICASSP40776.2020.9053277
Abstract
The redundant convolutional encoder-decoder network has proven useful in speech enhancement tasks. It can capture localized time-frequency details of speech signals through both its fully convolutional structure and the feature selection capability afforded by the encoder-decoder mechanism. However, it does not explicitly consider a signal filtering mechanism, which we regard as important for speech enhancement models. In this study, we introduce an attention mechanism into the convolutional encoder-decoder model. This mechanism adaptively filters channel-wise feature responses by explicitly modeling attention (to speech versus noise signals) across channels. Experimental results show that the proposed attention model is effective at recovering speech signals from background noise, and performs particularly well under unseen noise conditions compared to other state-of-the-art models.
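The abstract describes the attention block only at a high level. Below is a minimal PyTorch sketch of what "adaptively filtering channel-wise feature responses" can look like, assuming a squeeze-and-excitation-style gating design; the class name ChannelAttention, the reduction ratio, and the global-pooling choice are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Recalibrate channel-wise feature responses with learned gates.

    A sketch of squeeze-and-excitation-style channel attention; the paper's
    actual block may differ in pooling, depth, or where it sits in the
    encoder-decoder.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze: summarize each channel's time-frequency map into one scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: model inter-channel dependencies and emit per-channel gates.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # gates in (0, 1): pass speech-dominant channels, damp noisy ones
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time) feature maps from a convolutional encoder.
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates  # adaptively filter each channel's response


if __name__ == "__main__":
    # Example: gate a batch of encoder feature maps (batch, channels, freq, time).
    feats = torch.randn(2, 64, 129, 100)
    attn = ChannelAttention(64)
    print(attn(feats).shape)  # torch.Size([2, 64, 129, 100])
```

Because the gates are computed from the features themselves, the filtering adapts per utterance, which is consistent with the paper's claim of better generalization to unseen noise conditions.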