SEPDIFF: Speech Separation Based on Denoising Diffusion Model

Bo-Cheng Chen, Chao Wu, Wenbin Zhao
{"title":"SEPDIFF: Speech Separation Based on Denoising Diffusion Model","authors":"Bo-Cheng Chen, Chao Wu, Wenbin Zhao","doi":"10.1109/ICASSP49357.2023.10095979","DOIUrl":null,"url":null,"abstract":"Speech separation aims to extract multiple speech sources from mixed signals. In this paper, we propose SepDiff - a monaural speech separation method based on the denoising diffusion model (diffusion model). By modifying the diffusion and reverse process, we show that the diffusion model achieves an impressive performance on speech separation. To generate speech sources, we use mel spectrogram of the mixture as a condition in the training procedure and insert it in every step of the sampling procedure. We propose a novel DNN structure to leverage local and global speech information through successive feature channel attention and dilated 2-D convolution blocks on multi-resolution time-frequency features. We use a neural vocoder to get waveform from the generated mel spectrogram. We evaluate SepDiff on LibriMix datasets. Compared to SepFormer approach, SepDiff yields a higher mean opinion score (MOS) of 0.11.","PeriodicalId":113072,"journal":{"name":"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP49357.2023.10095979","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Speech separation aims to extract multiple speech sources from a mixed signal. In this paper, we propose SepDiff, a monaural speech separation method based on the denoising diffusion model (diffusion model). By modifying the diffusion and reverse processes, we show that the diffusion model achieves impressive performance on speech separation. To generate the speech sources, we use the mel spectrogram of the mixture as a condition during training and insert it at every step of the sampling procedure. We propose a novel DNN structure that leverages local and global speech information through successive feature channel attention and dilated 2-D convolution blocks operating on multi-resolution time-frequency features. We use a neural vocoder to recover the waveform from the generated mel spectrogram. We evaluate SepDiff on the LibriMix datasets. Compared to the SepFormer approach, SepDiff yields a mean opinion score (MOS) that is 0.11 higher.
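The following is a minimal sketch of the conditional sampling idea the abstract describes, assuming a standard DDPM formulation with a linear noise schedule: the mixture's mel spectrogram is concatenated with the current noisy estimate at every reverse step. `TinyDenoiser`, `sample_source`, and all shapes and hyperparameters here are illustrative assumptions, not the paper's actual network or settings.

```python
# Hedged sketch of mel-conditioned DDPM sampling (not the authors' code).
import torch
import torch.nn as nn

T = 200                                     # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative product: alpha-bar_t

class TinyDenoiser(nn.Module):
    """Hypothetical stand-in for SepDiff's network: predicts the noise eps
    from the noisy source estimate concatenated with the mixture mel
    spectrogram (2 input channels). Includes one dilated 2-D conv to echo
    the blocks mentioned in the abstract."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # dilated 2-D conv
            nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x_t, mixture_mel, t):
        # t is unused in this toy model; a real network would embed the step.
        return self.net(torch.cat([x_t, mixture_mel], dim=1))

@torch.no_grad()
def sample_source(model, mixture_mel):
    """Reverse process: start from Gaussian noise and denoise step by step,
    injecting the mixture mel spectrogram as the condition at every step,
    as the abstract describes. Generates one separated source."""
    x = torch.randn_like(mixture_mel)
    for t in reversed(range(T)):
        eps = model(x, mixture_mel, t)
        # Standard DDPM posterior mean:
        #   mu_t = (x_t - beta_t / sqrt(1 - alpha-bar_t) * eps) / sqrt(alpha_t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # generated mel spectrogram of one separated source

# Usage: an 80-bin mel spectrogram with 128 frames, batch of 1 (placeholder data).
model = TinyDenoiser()
mixture_mel = torch.randn(1, 1, 80, 128)
separated_mel = sample_source(model, mixture_mel)
print(separated_mel.shape)  # torch.Size([1, 1, 80, 128]); feed this to a vocoder
```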