{"title":"Progressive Skip Connection Improves Consistency of Diffusion-Based Speech Enhancement","authors":"Yue Lei;Xucheng Luo;Wenxin Tai;Fan Zhou","doi":"10.1109/LSP.2025.3560622","DOIUrl":null,"url":null,"abstract":"Recent advancements in generative modeling have successfully integrated denoising diffusion probabilistic models (DDPMs) into the domain of speech enhancement (SE). Despite their considerable advantages in generalizability, ensuring semantic consistency of the generated samples with the condition signal remains a formidable challenge. Inspired by techniques addressing posterior collapse in variational autoencoders, we explore skip connections within diffusion-based SE models to improve consistency with condition signals. However, experiments reveal that simply adding skip connections is ineffective and even counterproductive. We argue that the independence between the predictive target and the condition signal causes this failure. To address this, we modify the training objective from predicting random Gaussian noise to predicting clean speech and propose a progressive skip connection strategy to mitigate the decrease in mutual information between the layer's output and the condition signal as network depth increases. Experiments on two standard datasets demonstrate the effectiveness of our approach in both seen and unseen scenarios.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"1650-1654"},"PeriodicalIF":3.2000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10964569/","RegionNum":2,"RegionCategory":"Engineering & Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Recent advancements in generative modeling have successfully integrated denoising diffusion probabilistic models (DDPMs) into the domain of speech enhancement (SE). Despite their considerable advantages in generalizability, ensuring semantic consistency of the generated samples with the condition signal remains a formidable challenge. Inspired by techniques addressing posterior collapse in variational autoencoders, we explore skip connections within diffusion-based SE models to improve consistency with condition signals. However, experiments reveal that simply adding skip connections is ineffective and even counterproductive. We argue that the independence between the predictive target and the condition signal causes this failure. To address this, we modify the training objective from predicting random Gaussian noise to predicting clean speech and propose a progressive skip connection strategy to mitigate the decrease in mutual information between the layer's output and the condition signal as network depth increases. Experiments on two standard datasets demonstrate the effectiveness of our approach in both seen and unseen scenarios.
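The two ideas the abstract describes — switching the training target from the added Gaussian noise to the clean speech itself, and re-injecting the condition signal with a skip weight that grows with depth — can be illustrated with a minimal numerical sketch. The function names and the linear depth schedule below are illustrative assumptions; the letter itself does not specify its exact schedule in the abstract.

```python
import numpy as np

def progressive_skip(h, cond, layer_idx, num_layers):
    """Blend a layer's hidden state with the condition signal.

    The blending weight grows linearly with depth (a hypothetical
    schedule), so deeper layers, whose outputs share less mutual
    information with the condition, receive a stronger skip
    contribution from it.
    """
    alpha = (layer_idx + 1) / num_layers  # 1/L, 2/L, ..., 1.0
    return h + alpha * cond

def x0_prediction_loss(model_out, clean_speech):
    """Mean-squared error against the clean speech x0 directly,
    rather than against the sampled Gaussian noise epsilon."""
    return float(np.mean((model_out - clean_speech) ** 2))
```

Because the clean speech, unlike the sampled noise, is statistically dependent on the noisy condition signal, an x0-prediction objective gives the skip connections something the condition can actually help predict.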
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, as well as at several workshops organized by the Signal Processing Society.