Title: End-to-end speech-denoising deep neural network based on residual-attention gated linear units
Author: Seon Man Kim
DOI: 10.1049/ell2.70020
Journal: Electronics Letters (Impact Factor 0.7, JCR Q4, Engineering, Electrical & Electronic)
Publication date: 2024-10-15 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70020
Citations: 0
Abstract
End-to-end speech-denoising deep neural network based on residual-attention gated linear units
In this letter, an improved gated linear unit (GLU) structure for end-to-end (E2E) speech enhancement is proposed. In the U-Net structure, which is widely used as the foundational architecture for E2E deep-neural-network-based speech denoising, the input noisy speech signal passes through multiple encoding layers and is compressed into an essential latent representation at the bottleneck. This latent information is then passed to the decoder stage to restore the target clean speech. Among such approaches, CleanUNet, a prominent state-of-the-art (SOTA) method, strengthens temporal attention in the latent space by employing multi-head self-attention. In contrast to applying the attention mechanism only to the compressed latent representation of the bottleneck layer, the proposed method instead assigns an attention module to the GLU of each encoder/decoder block layer. The proposed method is validated by measuring short-term objective speech intelligibility and sound quality. The objective evaluation results indicate that the proposed method using residual-attention GLUs outperforms existing methods using SOTA models such as the FAIR denoiser and CleanUNet across signal-to-noise ratios ranging from 0 to 15 dB.
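The gating idea underlying the abstract can be sketched in a minimal, framework-free form. The sketch below shows a standard GLU (split the channels in half; the second half gates the first through a sigmoid) and a hypothetical residual-attention variant in which per-channel attention weights modulate the gate and the gated output is added back residually. The function names, the shape of the attention weights, and the placement of the residual connection are assumptions for illustration, not the paper's exact design.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def glu(features: list[float]) -> list[float]:
    """Standard gated linear unit: the first half of the channels is
    content; the sigmoid of the second half scales it element-wise."""
    n = len(features) // 2
    return [features[i] * sigmoid(features[i + n]) for i in range(n)]

def residual_attention_glu(features: list[float], attn: list[float]) -> list[float]:
    """Hypothetical residual-attention GLU: per-channel attention weights
    modulate the gate input, and the gated output is added back to the
    content via a residual connection. Illustrative sketch only."""
    n = len(features) // 2
    content, gate_in = features[:n], features[n:]
    gated = [content[i] * sigmoid(gate_in[i] * attn[i]) for i in range(n)]
    return [content[i] + gated[i] for i in range(n)]

# sigmoid(0) = 0.5, so a zero-valued gate passes half of each content channel
print(glu([1.0, 2.0, 0.0, 0.0]))  # [0.5, 1.0]
```

In an actual encoder/decoder block these operations would run per time step over learned feature maps; the point of the sketch is only that the attention signal enters inside the gate of every block rather than acting once on the bottleneck representation.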
Journal introduction:
Electronics Letters is an internationally renowned peer-reviewed rapid-communication journal that publishes short original research papers every two weeks. Its broad and interdisciplinary scope covers the latest developments in all fields related to electronic engineering, including communication, biomedical, optical, and device technologies. Electronics Letters also provides further insight into some of the latest developments through special features and interviews.
Scope
As a journal at the forefront of its field, Electronics Letters publishes papers covering all themes of electronic and electrical engineering. The major themes of the journal are listed below.
Antennas and Propagation
Biomedical and Bioinspired Technologies, Signal Processing and Applications
Control Engineering
Electromagnetism: Theory, Materials and Devices
Electronic Circuits and Systems
Image, Video and Vision Processing and Applications
Information, Computing and Communications
Instrumentation and Measurement
Microwave Technology
Optical Communications
Photonics and Opto-Electronics
Power Electronics, Energy and Sustainability
Radar, Sonar and Navigation
Semiconductor Technology
Signal Processing