{"title":"Gated Convolutional Fusion for Time-Domain Target Speaker Extraction Network","authors":"Wenjing Liu, Chuan Xie","doi":"10.21437/interspeech.2022-961","DOIUrl":null,"url":null,"abstract":"Target speaker extraction aims to extract the target speaker’s voice from mixed utterances based on auxillary reference speech of the target speaker. A speaker embedding is usually extracted from the reference speech and fused with the learned acoustic representation. The majority of existing works perform simple operation-based fusion of concatenation. However, potential cross-modal correlation may not be effectively explored by this naive approach that directly fuse the speaker embedding into the acoustic representation. In this work, we propose a gated convolutional fusion approach by exploring global conditional modeling and trainable gating mechanism for learning so-phisticated interaction between speaker embedding and acoustic representation. Experiments on WSJ0-2mix-extr dataset proves the efficacy of the proposed fusion approach, which performs favorably against other fusion methods with considerable improvement in terms of SDRi and SI-SDRi. Moreover, our method can be flexibly incorporated into similar time-domain speaker extraction networks to attain better performance.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"5368-5372"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interspeech","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21437/interspeech.2022-961","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Target speaker extraction aims to extract the target speaker’s voice from mixed utterances based on auxiliary reference speech of the target speaker. A speaker embedding is usually extracted from the reference speech and fused with the learned acoustic representation. Most existing works perform simple operation-based fusion, such as concatenation. However, potential cross-modal correlation may not be effectively explored by this naive approach, which directly fuses the speaker embedding into the acoustic representation. In this work, we propose a gated convolutional fusion approach that exploits global conditional modeling and a trainable gating mechanism to learn sophisticated interaction between the speaker embedding and the acoustic representation. Experiments on the WSJ0-2mix-extr dataset prove the efficacy of the proposed fusion approach, which compares favorably against other fusion methods with considerable improvement in terms of SDRi and SI-SDRi. Moreover, our method can be flexibly incorporated into similar time-domain speaker extraction networks to attain better performance.
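The abstract does not detail the exact layer configuration, so the sketch below is only a plausible illustration of the idea it describes: broadcasting the speaker embedding as a global condition over the acoustic representation, then combining a convolutional content path with a learned sigmoid gate instead of plain concatenation. The class name `GatedConvFusion`, the tensor shapes, and all hyperparameters are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GatedConvFusion(nn.Module):
    """Hypothetical gated convolutional fusion of a speaker embedding
    with a time-domain acoustic representation of shape (B, C, T)."""

    def __init__(self, acoustic_dim: int, spk_dim: int, kernel_size: int = 3):
        super().__init__()
        # Project the speaker embedding to the acoustic channel dimension.
        self.spk_proj = nn.Linear(spk_dim, acoustic_dim)
        padding = kernel_size // 2
        # Two 1-D convolutions: one content path and one gate path.
        self.content_conv = nn.Conv1d(acoustic_dim, acoustic_dim,
                                      kernel_size, padding=padding)
        self.gate_conv = nn.Conv1d(acoustic_dim, acoustic_dim,
                                   kernel_size, padding=padding)

    def forward(self, acoustic: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # acoustic: (B, C, T); spk_emb: (B, D)
        # Global conditioning: broadcast the projected embedding over time.
        cond = self.spk_proj(spk_emb).unsqueeze(-1)            # (B, C, 1)
        conditioned = acoustic + cond                          # (B, C, T)
        content = torch.tanh(self.content_conv(conditioned))   # content path
        gate = torch.sigmoid(self.gate_conv(conditioned))      # trainable gate in [0, 1]
        return content * gate                                  # gated fusion output


if __name__ == "__main__":
    # Toy shapes only, for a quick sanity check.
    fusion = GatedConvFusion(acoustic_dim=256, spk_dim=128)
    mix_repr = torch.randn(4, 256, 1000)   # (batch, channels, frames)
    spk_emb = torch.randn(4, 128)          # reference speaker embedding
    out = fusion(mix_repr, spk_emb)
    print(out.shape)                       # torch.Size([4, 256, 1000])
```

Because the block keeps the input and output shapes identical, a module of this kind could in principle be dropped into other time-domain extraction networks wherever the embedding and acoustic features are fused, which is consistent with the flexibility claim in the abstract.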