Title: Convolution-augmented external attention model for time domain speech separation
Authors: Yuning Zhang, He Yan, Linshan Du, Mengxue Li
Journal: Artificial Intelligence and Big Data Forum
Published: 2023-03-16
DOI: 10.1117/12.2671718 (https://doi.org/10.1117/12.2671718)
Abstract
In the time-domain audio separation network (TasNet), the separator's ability to capture fine-grained contextual features of speech signals, together with its parameter count, directly determines the accuracy and efficiency of speech separation. This paper combines lightweight external attention with convolution and extends external attention to the channel dimension; the resulting module supports fine-grained feature extraction and models spatial-channel correlation while keeping the parameter count and computational cost small. Convolutional positional encoding is also used to better integrate the contextual relationships and relative position information of speech features. This module is then applied as the separator in a TasNet-based encoder-decoder structure, yielding a new convolution-augmented external attention model for time-domain speech separation: ExConNet. Comparative experimental results show that ExConNet achieves competitive speech separation accuracy while significantly reducing model parameters and computation, better meeting the efficiency requirements of speech separation.
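To make the core idea concrete, below is a minimal sketch of external attention in the general form of Guo et al., where two small learnable external memories replace the query-key-value self-attention and keep the cost linear in sequence length. This is an illustrative assumption, not the paper's exact module: the channel-dimension extension, convolutional positional encoding, and TasNet integration described in the abstract are not reproduced here, and the memory size `s` and dimensions are arbitrary.

```python
# Hedged sketch of (lightweight) external attention with numpy.
# m_k and m_v are small learnable external memories shared across inputs;
# here they are random placeholders rather than trained weights.
import numpy as np

def external_attention(x, m_k, m_v):
    """x: (n_tokens, d) features; m_k: (d, s) key memory; m_v: (s, d) value memory."""
    attn = x @ m_k                                     # (n, s) similarity to memory slots
    # softmax over the memory-slot dimension
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # double normalization over the token dimension, as in external attention
    attn /= attn.sum(axis=0, keepdims=True) + 1e-9
    return attn @ m_v                                  # (n, d) re-projected features

rng = np.random.default_rng(0)
n, d, s = 16, 8, 4                                     # s << n keeps cost linear in n
x = rng.standard_normal((n, d))
out = external_attention(x,
                         rng.standard_normal((d, s)),
                         rng.standard_normal((s, d)))
print(out.shape)  # (16, 8)
```

Because the memories have only `d*s + s*d` parameters regardless of sequence length, this construction illustrates why an external-attention separator can stay far smaller than a standard self-attention one.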