{"title":"Towards decoupling frontend enhancement and backend recognition in monaural robust ASR","authors":"Yufeng Yang , Ashutosh Pandey , DeLiang Wang","doi":"10.1016/j.csl.2025.101821","DOIUrl":null,"url":null,"abstract":"<div><div>It has been shown that the intelligibility of noisy speech can be improved by speech enhancement (SE) algorithms. However, monaural SE has not been established as an effective frontend for automatic speech recognition (ASR) in noisy conditions compared to an ASR model trained on noisy speech directly. The divide between SE and ASR impedes the progress of robust ASR systems, especially as SE has made major advances in recent years. This paper focuses on eliminating this divide with an ARN (attentive recurrent network) time-domain, a TF-CrossNet time–frequency domain, and an MP-SENet magnitude-phase based enhancement model. The proposed systems decouple frontend enhancement and backend ASR, with the latter trained only on clean speech. Results on the WSJ, CHiME-2, LibriSpeech, and CHiME-4 corpora demonstrate that ARN, TF-CrossNet, and MP-SENet enhanced speech all translate to improved ASR results in noisy and reverberant environments, and generalize well to real acoustic scenarios. The proposed system outperforms the baselines trained on corrupted speech directly. Furthermore, it cuts the previous best word error rate (WER) on CHiME-2 by 28.4% relatively with a 5.6% WER, and achieves <span><math><mrow><mn>3</mn><mo>.</mo><mn>3</mn><mo>/</mo><mn>4</mn><mo>.</mo><mn>4</mn><mtext>%</mtext></mrow></math></span> WER on single-channel CHiME-4 simulated/real test data without training on CHiME-4. We also observe consistent improvements using noise-robust Whisper as the backend ASR model.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"95 ","pages":"Article 101821"},"PeriodicalIF":3.1000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230825000464","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
It has been shown that the intelligibility of noisy speech can be improved by speech enhancement (SE) algorithms. However, monaural SE has not been established as an effective frontend for automatic speech recognition (ASR) in noisy conditions, compared to an ASR model trained on noisy speech directly. The divide between SE and ASR impedes the progress of robust ASR systems, especially as SE has made major advances in recent years. This paper focuses on eliminating this divide with three enhancement models: a time-domain ARN (attentive recurrent network), a time–frequency domain TF-CrossNet, and a magnitude-phase based MP-SENet. The proposed systems decouple frontend enhancement and backend ASR, with the latter trained only on clean speech. Results on the WSJ, CHiME-2, LibriSpeech, and CHiME-4 corpora demonstrate that ARN-, TF-CrossNet-, and MP-SENet-enhanced speech all translate to improved ASR results in noisy and reverberant environments, and generalize well to real acoustic scenarios. The proposed system outperforms baselines trained on corrupted speech directly. Furthermore, it cuts the previous best word error rate (WER) on CHiME-2 by 28.4% relative, achieving a 5.6% WER, and attains 3.3%/4.4% WER on single-channel CHiME-4 simulated/real test data without training on CHiME-4. We also observe consistent improvements when using noise-robust Whisper as the backend ASR model.
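To make the decoupled design concrete, the following Python sketch chains a monaural SE frontend into an off-the-shelf ASR backend, using the openai-whisper package for recognition. The `se_frontend.pt` checkpoint and the `enhance` wrapper are hypothetical placeholders standing in for a trained enhancement model such as ARN, TF-CrossNet, or MP-SENet; this is an illustration of the pipeline structure, not the authors' released code.

```python
# Minimal sketch of a decoupled robust-ASR pipeline: a monaural speech
# enhancement (SE) frontend followed by an ASR backend that was trained
# only on clean speech. The SE model here is a placeholder.

import torch
import torchaudio
import whisper  # pip install openai-whisper


def enhance(noisy: torch.Tensor, se_model: torch.nn.Module) -> torch.Tensor:
    """Run the SE frontend on a single-channel waveform of shape [1, samples]."""
    with torch.no_grad():
        return se_model(noisy)


def transcribe(path: str, se_model: torch.nn.Module, asr) -> str:
    # Load the noisy recording and resample to 16 kHz, the rate Whisper expects.
    wav, sr = torchaudio.load(path)
    if sr != 16000:
        wav = torchaudio.functional.resample(wav, sr, 16000)
    enhanced = enhance(wav, se_model)
    # Whisper's transcribe() accepts a 1-D float32 array at 16 kHz.
    audio = enhanced.squeeze(0).float().numpy()
    return asr.transcribe(audio)["text"]


if __name__ == "__main__":
    se_model = torch.load("se_frontend.pt")  # hypothetical SE checkpoint
    se_model.eval()
    asr = whisper.load_model("small.en")  # backend never sees noisy training data
    print(transcribe("noisy_utterance.wav", se_model, asr))
```

Because the two stages communicate only through the enhanced waveform, the frontend and backend can be upgraded independently, which is the practical appeal of the decoupling the paper studies.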
Journal overview
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.