{"title":"Fractional feature-based speech enhancement with deep neural network","authors":"Liyun Xu, Tong Zhang","doi":"10.1016/j.specom.2023.102971","DOIUrl":null,"url":null,"abstract":"<div><p>Speech enhancement (SE) has become a considerable promise application of deep learning. Commonly, the deep neural network (DNN) in the SE task is trained to learn a mapping from the noisy features to the clean. However, the features are usually extracted in the time or frequency domain. In this paper, the improved features in the fractional domain are presented based on the flexible character of fractional Fourier transform (FRFT). First, the distribution characters and differences of the speech signal and the noise in the fractional domain are investigated. Second, the L1-optimal FRFT spectrum and the feature matrix constructed from a set of FRFT spectrums are served as the training features in DNN and applied in the SE. A series of pre-experiments conducted in various different fractional transform orders illustrate that the L1-optimal FRFT-DNN-based SE method can achieve a great enhancement result compared with the methods based on another single fractional spectrum. Moreover, the matrix of FRFT-DNN-based SE performs better under the same conditions. Finally, compared with other two typically SE models, the experiment results indicate that the proposed method could reach significantly performance in different SNRs with unseen noise types. The conclusions confirm the advantages of using the proposed improved features in the fractional domain.</p></div>","PeriodicalId":49485,"journal":{"name":"Speech Communication","volume":"153 ","pages":"Article 102971"},"PeriodicalIF":2.4000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Speech Communication","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S016763932300105X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Citations: 0
Abstract
Speech enhancement (SE) has become a promising application of deep learning. Commonly, the deep neural network (DNN) in an SE task is trained to learn a mapping from noisy features to clean ones. However, these features are usually extracted in the time or frequency domain. In this paper, improved features in the fractional domain are presented, based on the flexibility of the fractional Fourier transform (FRFT). First, the distribution characteristics of the speech signal and the noise in the fractional domain, and the differences between them, are investigated. Second, the L1-optimal FRFT spectrum and a feature matrix constructed from a set of FRFT spectra serve as the training features of the DNN and are applied to SE. A series of pre-experiments conducted at various fractional transform orders shows that the L1-optimal FRFT-DNN-based SE method achieves better enhancement than methods based on any other single fractional spectrum. Moreover, the FRFT feature-matrix-based SE performs better still under the same conditions. Finally, compared with two other typical SE models, the experimental results indicate that the proposed method achieves significantly better performance at different SNRs with unseen noise types. These conclusions confirm the advantages of the proposed improved features in the fractional domain.
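The abstract describes a pipeline of computing FRFT spectra of speech frames at several transform orders, selecting an L1-optimal order, and stacking the spectra into a feature matrix for the DNN. The sketch below is a minimal illustration of that front end, not the authors' implementation: it assumes one particular discretization of the FRFT (a fractional power of the unitary DFT matrix), assumes that "L1-optimal" means the order whose spectrum magnitude has the smallest L1 norm, and uses an illustrative frame length and order grid. The function names (`dfrft_matrix`, `l1_optimal_order`, `feature_matrix`) are hypothetical.

```python
# Hedged sketch of fractional-domain feature extraction for DNN-based SE.
# Assumptions not given in the abstract: the discrete FRFT is taken as a
# fractional power of the unitary DFT matrix, and the "L1-optimal" order is
# the one whose spectrum magnitude has minimum L1 norm (most concentrated).
import numpy as np
from scipy.linalg import fractional_matrix_power


def dfrft_matrix(n: int, order: float) -> np.ndarray:
    """One possible discrete FRFT: a fractional power of the unitary DFT matrix."""
    dft = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
    return fractional_matrix_power(dft, order)    # F^a for transform order a


def frft_spectrum(frame: np.ndarray, order: float) -> np.ndarray:
    """Magnitude of the order-a FRFT spectrum of one windowed frame."""
    return np.abs(dfrft_matrix(len(frame), order) @ frame)


def l1_optimal_order(frame: np.ndarray, orders) -> float:
    """Assumed criterion: pick the order whose spectrum has minimum L1 norm."""
    return min(orders, key=lambda a: np.sum(frft_spectrum(frame, a)))


def feature_matrix(frame: np.ndarray, orders) -> np.ndarray:
    """Stack FRFT spectra over a set of orders (rows = orders) as DNN input."""
    return np.stack([frft_spectrum(frame, a) for a in orders])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = np.hanning(256) * rng.standard_normal(256)  # stand-in for a speech frame
    orders = np.round(np.arange(0.1, 1.01, 0.1), 2)      # illustrative order grid
    a_star = l1_optimal_order(frame, orders)
    feats = feature_matrix(frame, orders)
    print(f"L1-optimal order: {a_star}, feature matrix shape: {feats.shape}")
```

In a full SE system, such per-frame features would be computed for both noisy and clean speech, and the DNN trained to map the noisy fractional-domain features to the clean ones, as the abstract describes.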
Journal description:
Speech Communication is an interdisciplinary journal whose primary objective is to fulfil the need for the rapid dissemination and thorough discussion of basic and applied research results.
The journal's primary objectives are:
• to present a forum for the advancement of human and human-machine speech communication science;
• to stimulate cross-fertilization between different fields of this domain;
• to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.