Ultra-Low Latency Speech Enhancement - A Comprehensive Study

Haibin Wu, Sebastian Braun
arXiv - EE - Audio and Speech Processing · Published 2024-09-16 · doi: arxiv-2409.10358 (https://doi.org/arxiv-2409.10358)
Citations: 0

Abstract

Speech enhancement models for hearing assistive devices must meet very low latency requirements, typically below 5 ms. While various low-latency techniques have been proposed, a controlled comparison of these methods using DNNs is still missing. Previous papers differ in task, training data, scripts, and evaluation settings, making fair comparison impossible. Moreover, all methods have been tested only on small, simulated datasets, making it difficult to assess their performance under real-world conditions, which could affect the reliability of scientific findings. To address these issues, we comprehensively investigate various low-latency techniques using consistent training on large-scale data and evaluate them with more relevant metrics on real-world data. Specifically, we explore the effectiveness of asymmetric windows, learnable windows, adaptive time-domain filterbanks, and the future-frame prediction technique. Additionally, we examine whether increasing the model size can compensate for the reduced window size, and we evaluate the novel Mamba architecture in low-latency settings.
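To make the abstract's sub-5 ms budget concrete: in frame-based processing the algorithmic latency is roughly the (synthesis) window length, so hitting the budget forces very short windows — which is what the asymmetric-window and learnable-window techniques above try to compensate for. Below is a minimal sketch, not taken from the paper, assuming 16 kHz audio and plain overlap-add with a periodic Hann window at 50% overlap (a standard constant-overlap-add configuration).

```python
import numpy as np

sr = 16000            # sample rate in Hz (an assumed value)
win_len = 64          # 4 ms window at 16 kHz -> meets the <5 ms budget
hop = win_len // 2    # 50% overlap

# A periodic Hann window satisfies the constant overlap-add (COLA)
# condition at 50% overlap, so overlap-adding windowed frames
# reconstructs the signal up to a constant gain.
window = np.hanning(win_len + 1)[:-1]

# Verify COLA numerically: shifted copies of the window sum to 1.
cola = np.zeros(win_len * 4)
for start in range(0, len(cola) - win_len + 1, hop):
    cola[start:start + win_len] += window
mid = cola[win_len:-win_len]      # ignore ramp-up/ramp-down edges
assert np.allclose(mid, 1.0)

latency_ms = 1000.0 * win_len / sr
print(f"algorithmic latency: {latency_ms} ms")  # 4.0 ms
```

Shrinking `win_len` this far also shrinks the spectral resolution of each frame, which is exactly the trade-off that motivates long asymmetric analysis windows paired with short synthesis windows.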