CTSE-Net: Resource-efficient convolutional and TF-transformer network for speech enhancement

IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Nasir Saleem, Sami Bourouis, Hela Elmannai, Abeer D. Algarni
{"title":"CTSE-Net: Resource-efficient convolutional and TF-transformer network for speech enhancement","authors":"Nasir Saleem ,&nbsp;Sami Bourouis ,&nbsp;Hela Elmannai ,&nbsp;Abeer D. Algarni","doi":"10.1016/j.knosys.2025.113452","DOIUrl":null,"url":null,"abstract":"<div><div>Deep Neural Networks (DNNs) are powerful tools in real-time speech enhancement (SE) since they automatically learn high-level feature representations from raw audio, resulting in significant advancements. Therefore, demand for resource-efficient DNNs for speech enhancement is increasing, mainly using embedded systems. Still, a lightweight and resource-efficient DNN with optimal speech enhancement performance is a challenging task. Dual-path attention-driven architectures have shown notable performance in SE, primarily because of their ability to capture time and frequency dependencies. This paper proposes a resource-efficient SE using a codec-based dual-path time–frequency transformer (CTSE-Net) to improve noisy speech and apply it to speech recognition tasks. The proposed SE employs a codec (coder–decoder) architecture with feature calibration in skip connections to obtain fine-grained frequency components. The codec is interconnected using a dual-path time–frequency transformer incorporating time and frequency attentions. The encoder encodes a time–frequency (T–F) representation derived from the distorted compressed speech spectrum, whereas the decoder estimates the compressed magnitude spectrum of enhanced speech. Further, dedicated speech activity detection (SAD) is employed to identify speech segments in the input signals. By distinguishing speech from background noise or silence, the SAD block provides important information to the decoder for target speech enhancement. The proposed resource-efficient approach ensures attention across time–frequency and distinguishes speech from background noise, leading to more effective denoising and enhancement. Experiments indicate that CTSE-Net shows robust noise reduction and contributes to accurate speech recognition. On the benchmark VCTK+DEMAND dataset, the proposed CTSE-Net demonstrates better SE performance, achieving notable improvements in ESTOI (33.69%), PESQ (1.05), and SDR (11.36 dB) over the noisy mixture.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"317 ","pages":"Article 113452"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S095070512500499X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Deep Neural Networks (DNNs) are powerful tools for real-time speech enhancement (SE) because they automatically learn high-level feature representations from raw audio, which has led to significant advances. Demand for resource-efficient DNNs for speech enhancement is therefore increasing, particularly for deployment on embedded systems. Still, designing a lightweight, resource-efficient DNN with strong speech enhancement performance remains challenging. Dual-path attention-driven architectures have shown notable SE performance, primarily because of their ability to capture both time and frequency dependencies. This paper proposes a resource-efficient SE model, a codec-based dual-path time–frequency transformer network (CTSE-Net), to enhance noisy speech and applies it to speech recognition tasks. The proposed model employs a codec (coder–decoder) architecture with feature calibration in the skip connections to recover fine-grained frequency components. The encoder and decoder are interconnected by a dual-path time–frequency transformer that applies attention along both the time and frequency axes. The encoder encodes a time–frequency (T–F) representation derived from the distorted compressed speech spectrum, while the decoder estimates the compressed magnitude spectrum of the enhanced speech. In addition, a dedicated speech activity detection (SAD) block identifies speech segments in the input signal. By distinguishing speech from background noise or silence, the SAD block provides the decoder with information useful for target speech enhancement. The proposed resource-efficient approach applies attention across time and frequency and distinguishes speech from background noise, leading to more effective denoising and enhancement. Experiments indicate that CTSE-Net achieves robust noise reduction and contributes to accurate speech recognition. On the benchmark VCTK+DEMAND dataset, CTSE-Net delivers strong SE performance, with notable improvements over the noisy mixture in ESTOI (33.69%), PESQ (1.05), and SDR (11.36 dB).
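As an illustration of the dual-path attention the abstract describes, below is a minimal PyTorch sketch of a time–frequency transformer block that attends first across frequency bins and then across time frames, applied to a power-law compressed magnitude spectrum. Everything here is an assumption for illustration only: the class name TFTransformerBlock, the channel width, head count, STFT settings, and the compression exponent 0.3 (a common choice in the SE literature) are not taken from the paper, whose exact configuration the abstract does not specify.

```python
# Minimal sketch of a dual-path time-frequency transformer block (illustrative;
# not the paper's implementation). Assumes PyTorch >= 1.9 for batch_first
# attention. All layer sizes and STFT parameters are assumptions.
import torch
import torch.nn as nn


class TFTransformerBlock(nn.Module):
    """Self-attention along the frequency axis, then along the time axis."""

    def __init__(self, channels: int, n_heads: int = 4):
        super().__init__()
        self.freq_attn = nn.MultiheadAttention(channels, n_heads, batch_first=True)
        self.time_attn = nn.MultiheadAttention(channels, n_heads, batch_first=True)
        self.norm_f = nn.LayerNorm(channels)
        self.norm_t = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq)
        b, c, t, f = x.shape

        # Frequency attention: treat each time frame as a batch item and
        # attend across frequency bins.
        xf = x.permute(0, 2, 3, 1).reshape(b * t, f, c)
        attn_f, _ = self.freq_attn(xf, xf, xf)
        xf = self.norm_f(xf + attn_f)

        # Time attention: treat each frequency bin as a batch item and
        # attend across time frames.
        xt = xf.reshape(b, t, f, c).permute(0, 2, 1, 3).reshape(b * f, t, c)
        attn_t, _ = self.time_attn(xt, xt, xt)
        xt = self.norm_t(xt + attn_t)

        # Restore (batch, channels, time, freq)
        return xt.reshape(b, f, t, c).permute(0, 3, 2, 1)


if __name__ == "__main__":
    # Power-law compressed magnitude spectrum of a (random) noisy waveform.
    # The exponent 0.3 is an assumed, commonly used compression factor.
    wav = torch.randn(1, 16000)                       # 1 s of audio at 16 kHz
    spec = torch.stft(wav, n_fft=512, hop_length=256,
                      window=torch.hann_window(512), return_complex=True)
    mag = spec.abs() ** 0.3                           # compressed magnitude
    feat = mag.unsqueeze(1).permute(0, 1, 3, 2)       # (batch, 1, time, freq)
    proj = nn.Conv2d(1, 32, kernel_size=1)            # lift to 32 channels
    block = TFTransformerBlock(channels=32)
    out = block(proj(feat))
    print(out.shape)                                  # torch.Size([1, 32, 63, 257])
```

Splitting attention along the two axes in this way keeps each attention pass linear in the other axis's length, rather than quadratic in the full time–frequency grid, which is one reason dual-path designs suit resource-constrained deployment.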
Source journal
Knowledge-Based Systems (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.