{"title":"Latent Mixup Knowledge Distillation for Single Channel Speech Enhancement","authors":"Behnam Gholami;Mostafa El-Khamy;Kee-Bong Song","doi":"10.1109/JSTSP.2024.3524022","DOIUrl":null,"url":null,"abstract":"Traditional speech enhancement methods often rely on complex signal processing algorithms, which may not be efficient for real-time applications or may suffer from limitations in handling various types of noise. Deploying complex Deep Neural Network (DNN) models in resource-constrained environments can be challenging due to their high computational requirements. In this paper, we propose a Knowledge Distillation (KD) method for speech enhancement leveraging the information stored in the intermediate latent features of a very complex DNN (teacher) model to train a smaller, more efficient (student) model. Experimental results on a two benchmark speech enhancement datasets demonstrate the effectiveness of the proposed KD method for speech enhancement. The student model trained with knowledge distillation outperforms SOTA speech enhancement methods and achieves comparable performance to the teacher model. Furthermore, our method achieves significant reductions in computational complexity, making it suitable for deployment in resource-constrained environments such as embedded systems and mobile devices.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"18 8","pages":"1544-1556"},"PeriodicalIF":8.7000,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10832563/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Traditional speech enhancement methods often rely on complex signal processing algorithms, which may not be efficient for real-time applications or may have difficulty handling diverse types of noise. Deploying complex Deep Neural Network (DNN) models in resource-constrained environments can also be challenging due to their high computational requirements. In this paper, we propose a Knowledge Distillation (KD) method for speech enhancement that leverages the information stored in the intermediate latent features of a very complex DNN (teacher) model to train a smaller, more efficient (student) model. Experimental results on two benchmark speech enhancement datasets demonstrate the effectiveness of the proposed KD method. The student model trained with knowledge distillation outperforms state-of-the-art (SOTA) speech enhancement methods and achieves performance comparable to the teacher model. Furthermore, our method significantly reduces computational complexity, making it suitable for deployment in resource-constrained environments such as embedded systems and mobile devices.
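To make the idea concrete, the sketch below illustrates one plausible reading of feature-level knowledge distillation with mixup applied in the latent space, as suggested by the title and abstract. This is a minimal, hedged example only: the paper's exact loss terms, layer selection, and mixup formulation are not given here, and the function names, teacher/student interfaces, and hyperparameters are hypothetical placeholders rather than the authors' implementation.

```python
# Hedged sketch of latent-mixup knowledge distillation for speech enhancement.
# Assumptions (not from the paper): teacher(x) and student(x) are callables
# returning (intermediate_latent, enhanced_waveform), the teacher and student
# latents share a shape (otherwise a projection layer would be needed), and the
# training objective is an L1 reconstruction loss plus an MSE feature-matching
# KD loss on mixup-combined latents.
import torch
import torch.nn.functional as F


def latent_mixup_kd_loss(teacher, student, noisy_a, noisy_b, clean_a, clean_b,
                         alpha=0.4, kd_weight=1.0):
    """Combine a student reconstruction loss with a KD loss that matches
    mixup-blended student latents to mixup-blended teacher latents."""
    # Sample the mixup coefficient from a Beta distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    with torch.no_grad():  # the teacher is frozen during distillation
        t_feat_a, _ = teacher(noisy_a)
        t_feat_b, _ = teacher(noisy_b)
    s_feat_a, s_out_a = student(noisy_a)
    s_feat_b, s_out_b = student(noisy_b)

    # Blend both models' latents with the same coefficient, then match them.
    t_mix = lam * t_feat_a + (1.0 - lam) * t_feat_b
    s_mix = lam * s_feat_a + (1.0 - lam) * s_feat_b
    kd_loss = F.mse_loss(s_mix, t_mix)

    # Standard signal-reconstruction loss on the student's enhanced outputs.
    enh_loss = F.l1_loss(s_out_a, clean_a) + F.l1_loss(s_out_b, clean_b)
    return enh_loss + kd_weight * kd_loss
```

In this reading, the teacher contributes only through its intermediate latents, so the student can be much smaller than the teacher while still being guided by the teacher's internal representation; the mixup step is one way to regularize that feature-matching objective.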
About the Journal:
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.