Yuxuan Zhang, Zipeng Zhang, Weiwei Guo, Wei Chen, Zhaohai Liu, Houguang Liu
Title: LRetUNet: A U-Net-based retentive network for single-channel speech enhancement
DOI: 10.1016/j.csl.2025.101798
Journal: Computer Speech and Language, vol. 93, Article 101798 (JCR Q2, Computer Science, Artificial Intelligence; Impact Factor 3.1)
Publication date: 2025-03-24
URL: https://www.sciencedirect.com/science/article/pii/S0885230825000233
Citations: 0
Abstract
Speech enhancement is an essential component of many user-oriented audio applications and a fundamental task for robust speech processing. Although numerous speech enhancement methods have been proposed and have shown strong performance, a notable gap persists in the development of lightweight solutions that effectively balance performance with computational efficiency. This paper addresses that gap by introducing a novel approach to speech enhancement that integrates a retentive mechanism within a U-Net architecture. The primary innovation of the proposed method is the design and implementation of a high-frequency future filter module, which utilizes the Fast Fourier Transform (FFT) to improve the model's capacity to preserve and process the high-frequency information that is essential for speech clarity. This module, in conjunction with the retentive mechanism, enables the network to preserve essential features across layers, improving enhancement performance. The proposed method was assessed on the DNS (Deep Noise Suppression) and VoiceBank+DEMAND datasets, which are widely recognized benchmarks in the field of speech enhancement. The experimental results demonstrate that the proposed method achieves competitive performance while maintaining relatively low computational complexity. This characteristic renders the method particularly suitable for real-time applications, where both performance and efficiency are critical.
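The abstract describes an FFT-based module that emphasizes high-frequency content before further processing, but gives no implementation details. The following is a minimal, hypothetical sketch of that general idea in NumPy: transform a signal to the frequency domain, scale the bins above a chosen cutoff, and invert the transform. The function name, cutoff ratio, and gain are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def high_frequency_emphasis(x, cutoff_ratio=0.5, gain=1.5):
    """Illustrative FFT-based high-frequency filter (not the paper's module).

    Boosts spectral bins above `cutoff_ratio` of the representable band
    by `gain`, then returns to the time domain.
    """
    spectrum = np.fft.rfft(x)                     # real-input FFT
    cutoff = int(len(spectrum) * cutoff_ratio)    # index of the cutoff bin
    spectrum[cutoff:] *= gain                     # amplify high frequencies
    return np.fft.irfft(spectrum, n=len(x))       # back to the time domain

# Usage: a 1 kHz + 6 kHz mixture, 1 second at 16 kHz sampling rate
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
y = high_frequency_emphasis(x, cutoff_ratio=0.25, gain=2.0)
```

In the paper this kind of filtering is presumably learned and embedded inside the U-Net rather than applied with fixed parameters as above.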
About the journal:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.