Latent Mixup Knowledge Distillation for Single Channel Speech Enhancement

Impact Factor 8.7 · CAS Region 1 (Engineering & Technology) · JCR Q1, Engineering, Electrical & Electronic
Behnam Gholami;Mostafa El-Khamy;Kee-Bong Song
DOI: 10.1109/JSTSP.2024.3524022
Journal: IEEE Journal of Selected Topics in Signal Processing, vol. 18, no. 8, pp. 1544-1556
Publication date: 2025-01-07 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10832563/
Citations: 0

Abstract

Traditional speech enhancement methods often rely on complex signal processing algorithms, which may not be efficient for real-time applications or may suffer from limitations in handling various types of noise. Deploying complex Deep Neural Network (DNN) models in resource-constrained environments can be challenging due to their high computational requirements. In this paper, we propose a Knowledge Distillation (KD) method for speech enhancement that leverages the information stored in the intermediate latent features of a very complex DNN (teacher) model to train a smaller, more efficient (student) model. Experimental results on two benchmark speech enhancement datasets demonstrate the effectiveness of the proposed KD method for speech enhancement. The student model trained with knowledge distillation outperforms state-of-the-art (SOTA) speech enhancement methods and achieves performance comparable to the teacher model. Furthermore, our method achieves significant reductions in computational complexity, making it suitable for deployment in resource-constrained environments such as embedded systems and mobile devices.
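The abstract describes distilling a teacher's intermediate latent features into a smaller student, and the title indicates mixup is applied in that latent space. As a rough illustrative sketch only (not the authors' actual method — the function names, the fixed mixing coefficient, and the simple mean-squared feature-matching loss are all assumptions), the two ingredients might combine as:

```python
import random

def mixup(a, b, lam):
    # Convex combination of two latent feature vectors (standard mixup).
    return [lam * x + (1.0 - lam) * y for x, y in zip(a, b)]

def latent_kd_loss(student_feats, teacher_feats, lam=0.7):
    # Illustrative feature-matching distillation term: squared error between
    # the student's and the teacher's *mixed* intermediate representations.
    zs = mixup(student_feats[0], student_feats[1], lam)
    zt = mixup(teacher_feats[0], teacher_feats[1], lam)
    return sum((s - t) ** 2 for s, t in zip(zs, zt)) / len(zs)

random.seed(0)
feats = [[random.gauss(0, 1) for _ in range(16)] for _ in range(2)]
shifted = [[x + 1.0 for x in f] for f in feats]

loss_match = latent_kd_loss(feats, feats)    # identical latents -> 0.0
loss_gap = latent_kd_loss(feats, shifted)    # constant unit offset -> ~1.0
```

In practice this term would be added to the ordinary enhancement loss (e.g. on the denoised waveform or spectrogram), with `lam` drawn from a Beta distribution per batch as in standard mixup rather than fixed.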
Source Journal
IEEE Journal of Selected Topics in Signal Processing
CiteScore: 19.00
Self-citation rate: 1.30%
Articles per year: 135
Review time: 3 months
Journal description: The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others. The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.