arXiv - EE - Audio and Speech Processing: Latest Publications

Enhancing Multilingual Speech Generation and Recognition Abilities in LLMs with Constructed Code-switched Data
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-17 | DOI: arxiv-2409.10969
Jing Xu, Daxin Tan, Jiaqi Wang, Xiao Chen
Abstract: While large language models (LLMs) have been explored in the speech domain for both generation and recognition tasks, their applications are predominantly confined to the monolingual scenario, with limited exploration in multilingual and code-switched (CS) contexts. Additionally, speech generation and recognition tasks are often handled separately, as in VALL-E and Qwen-Audio. In this paper, we propose a MultiLingual MultiTask (MLMT) model that integrates multilingual speech generation and recognition tasks within a single LLM. Furthermore, we develop an effective data construction approach that splits and concatenates words from different languages to equip LLMs with CS synthesis ability without relying on CS data. The experimental results demonstrate that our model outperforms other baselines at a comparable data scale. Moreover, our data construction approach not only equips LLMs with CS speech synthesis capability, with comparable speaker consistency and similarity to any given speaker, but also improves the performance of LLMs in multilingual speech generation and recognition tasks.
Citations: 0
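
The data construction idea above, splitting monolingual transcripts into word spans and concatenating spans from different languages, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' implementation; it assumes word-level transcripts from two monolingual corpora and simply interleaves fixed-size word spans to form synthetic code-switched text (the matching audio would be concatenated analogously from per-word segments).

    import random

    def make_code_switched(words_a, words_b, span=2, seed=0):
        """Interleave word spans from two monolingual transcripts.

        words_a, words_b: word lists from two different languages.
        span: number of consecutive words taken from one language
              before switching to the other.
        Returns a synthetic code-switched word list.
        """
        rng = random.Random(seed)
        sources, idx = [words_a, words_b], [0, 0]
        turn = rng.randint(0, 1)              # language that starts the utterance
        out = []
        while idx[0] < len(words_a) or idx[1] < len(words_b):
            out.extend(sources[turn][idx[turn]: idx[turn] + span])
            idx[turn] += span
            if idx[1 - turn] < len(sources[1 - turn]):
                turn = 1 - turn               # switch language if words remain
        return out

    # Example: synthetic Mandarin-English code-switching from two sentences.
    print(" ".join(make_code_switched("i like to drink coffee".split(),
                                      "我 喜欢 喝 咖啡".split())))
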
3DFacePolicy: Speech-Driven 3D Facial Animation with Diffusion Policy
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-17 | DOI: arxiv-2409.10848
Xuanmeng Sha, Liyun Zhang, Tomohiro Mashita, Yuki Uranishi
Abstract: Audio-driven 3D facial animation has made impressive progress in both research and application development. The newest approaches focus on Transformer-based and diffusion-based methods; however, there is still a gap in vividness and emotional expression between the generated animation and a real human face. To tackle this limitation, we propose 3DFacePolicy, a diffusion policy model for 3D facial animation prediction. This method generates variable and realistic human facial movements by predicting the 3D vertex trajectory on a 3D facial template with a diffusion policy, instead of generating the face for every frame. It takes audio and vertex states as observations to predict the vertex trajectory and imitate real human facial expressions, which preserves the continuous and natural flow of human emotions. The experiments show that our approach is effective in synthesizing variable and dynamic facial motion.
Citations: 0
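
The core mechanism, a diffusion policy that denoises a short chunk of vertex-trajectory actions conditioned on audio features and the current vertex state, can be sketched generically. The module and sampler below are a simplified stand-in with assumed feature sizes and a placeholder noise schedule; they are not the 3DFacePolicy architecture or training code.

    import torch
    import torch.nn as nn

    class TrajectoryDenoiser(nn.Module):
        """Toy noise predictor: denoises a chunk of vertex offsets
        conditioned on audio features and the current vertex state."""
        def __init__(self, n_verts=64, horizon=8, audio_dim=128, hidden=256):
            super().__init__()
            self.horizon, self.n_verts = horizon, n_verts
            in_dim = horizon * n_verts * 3 + audio_dim + n_verts * 3 + 1
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.SiLU(),
                nn.Linear(hidden, horizon * n_verts * 3))

        def forward(self, noisy_traj, audio_feat, vert_state, t):
            # noisy_traj: (b, horizon, n_verts, 3), audio_feat: (b, audio_dim),
            # vert_state: (b, n_verts, 3), t: (b,) diffusion-time conditioning.
            x = torch.cat([noisy_traj.flatten(1), audio_feat,
                           vert_state.flatten(1), t[:, None]], dim=1)
            return self.net(x).view(-1, self.horizon, self.n_verts, 3)

    @torch.no_grad()
    def sample_trajectory(model, audio_feat, vert_state, steps=50):
        """DDPM-style ancestral sampling over one trajectory chunk."""
        b = audio_feat.shape[0]
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        alpha_bar = torch.cumprod(alphas, dim=0)
        x = torch.randn(b, model.horizon, model.n_verts, 3)
        for i in reversed(range(steps)):
            t = torch.full((b,), i / steps)
            eps = model(x, audio_feat, vert_state, t)
            x = (x - betas[i] / torch.sqrt(1 - alpha_bar[i]) * eps) \
                / torch.sqrt(alphas[i])
            if i > 0:
                x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
        return x  # predicted vertex offsets for the next `horizon` frames
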
Spontaneous Informal Speech Dataset for Punctuation Restoration
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-17 | DOI: arxiv-2409.11241
Xing Yi Liu, Homayoon Beigi
Abstract: Presently, punctuation restoration models are evaluated almost solely on well-structured, scripted corpora. On the other hand, real-world ASR systems and post-processing pipelines are typically applied to spontaneous speech with significant irregularities, stutters, and deviations from perfect grammar. To address this discrepancy, we introduce SponSpeech, a punctuation restoration dataset derived from informal speech sources, which includes punctuation and casing information. In addition to publicly releasing the dataset, we contribute a filtering pipeline that can be used to generate more data. Our filtering pipeline examines the quality of both speech audio and transcription text. We also carefully construct a "challenging" test set, aimed at evaluating models' ability to leverage audio information to predict otherwise grammatically ambiguous punctuation. SponSpeech is available at https://github.com/GitHubAccountAnonymous/PR, along with all code for dataset building and model runs.
Citations: 0
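
A filtering pipeline of the kind described, one that screens both the speech audio and the transcription text, might look like the sketch below. The specific checks and thresholds (duration, signal level, transcript length, punctuation presence) are illustrative assumptions, not the released SponSpeech pipeline.

    import re
    import numpy as np
    import soundfile as sf

    def passes_quality_filter(wav_path, transcript,
                              min_dur=1.0, max_dur=30.0, min_rms_db=-40.0):
        """Keep a clip only if both audio and transcript look usable."""
        audio, sr = sf.read(wav_path)
        if audio.ndim > 1:                      # downmix to mono
            audio = audio.mean(axis=1)
        dur = len(audio) / sr
        rms_db = 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)
        audio_ok = (min_dur <= dur <= max_dur) and (rms_db >= min_rms_db)

        words = transcript.split()
        has_punct = bool(re.search(r"[.,?!]", transcript))   # punctuation present
        ascii_ratio = sum(c.isascii() for c in transcript) / max(len(transcript), 1)
        text_ok = len(words) >= 3 and has_punct and ascii_ratio > 0.9
        return audio_ok and text_ok
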
Room impulse response prototyping using receiver distance estimations for high quality room equalisation algorithms
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10131
James Brooks-Park, Martin Bo Møller, Jan Østergaard, Søren Bech, Steven van de Par
Abstract: Room equalisation aims to increase the quality of loudspeaker reproduction in reverberant environments, compensating for colouration caused by imperfect room reflections and frequency-dependent loudspeaker directivity. A common technique in the field of room equalisation is to invert a prototype Room Impulse Response (RIR). Rather than inverting a single RIR at the listening position, a prototype response is composed from several responses distributed around the listening area. This paper proposes a method of impulse response prototyping that uses estimated receiver positions to form a weighted-average prototype response. A method of receiver distance estimation is described, supporting the implementation of the prototype RIR. The proposed prototyping method is compared to other methods by measuring their post-equalisation spectral deviation at several positions in a simulated room.
Citations: 0
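
The prototyping step, forming a weighted-average prototype RIR from several measured responses using estimated receiver distances, can be sketched as follows. The inverse-distance weighting is an assumption for illustration; the paper's actual weighting scheme may differ.

    import numpy as np

    def prototype_rir(rirs, distances, eps=1e-3):
        """Distance-weighted average of measured room impulse responses.

        rirs:      array of shape (n_positions, n_taps), one RIR per measurement point
        distances: estimated distance of each measurement point from the
                   nominal listening position, shape (n_positions,)
        Returns a single prototype RIR of shape (n_taps,).
        """
        rirs = np.asarray(rirs, dtype=float)
        w = 1.0 / (np.asarray(distances, dtype=float) + eps)   # closer points count more
        w /= w.sum()
        return (w[:, None] * rirs).sum(axis=0)

    # Example: three measured RIRs; the closest position dominates the prototype,
    # which would then be inverted to design the equalisation filter.
    proto = prototype_rir(np.random.randn(3, 4800), distances=[0.2, 0.6, 1.1])
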
Leveraging Joint Spectral and Spatial Learning with MAMBA for Multichannel Speech Enhancement
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10376
Wenze Ren, Haibin Wu, Yi-Cheng Lin, Xuanjun Chen, Rong Chao, Kuo-Hsuan Hung, You-Jin Li, Wen-Yuan Ting, Hsin-Min Wang, Yu Tsao
Abstract: In multichannel speech enhancement, effectively capturing spatial and spectral information across different microphones is crucial for noise reduction. Traditional methods, such as CNNs or LSTMs, attempt to model the temporal dynamics of full-band and sub-band spectral and spatial features. However, these approaches face limitations in fully modeling complex temporal dependencies, especially in dynamic acoustic environments. To overcome these challenges, we modify the advanced McNet model by introducing an improved version of Mamba, a state-space model, and further propose MCMamba. MCMamba has been completely re-engineered to integrate full-band and narrow-band spatial information with sub-band and full-band spectral features, providing a more comprehensive approach to modeling spatial and spectral information. Our experimental results demonstrate that MCMamba significantly improves the modeling of spatial and spectral features in multichannel speech enhancement, outperforming McNet and achieving state-of-the-art performance on the CHiME-3 dataset. Additionally, we find that Mamba performs exceptionally well in modeling spectral information.
Citations: 0
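
As a rough illustration of the feature views such a model consumes, the sketch below builds full-band spectral, sub-band spectral, and inter-channel spatial (phase-difference) representations from a multichannel STFT. It is purely illustrative; the actual MCMamba feature extraction and its Mamba blocks are not reproduced here.

    import numpy as np

    def stft(x, n_fft=512, hop=256):
        """Naive multichannel STFT: x has shape (channels, samples)."""
        win = np.hanning(n_fft)
        frames = []
        for start in range(0, x.shape[1] - n_fft + 1, hop):
            frames.append(np.fft.rfft(x[:, start:start + n_fft] * win, axis=1))
        return np.stack(frames, axis=1)          # (channels, frames, freq)

    def feature_views(spec, ref_ch=0):
        """Build spectral and spatial views from a complex STFT.

        spec: complex array of shape (channels, frames, freq).
        Returns:
          full_band_spec: log-magnitude of the reference channel, (frames, freq)
          sub_band_spec:  per-frequency magnitudes across channels, (freq, frames, channels)
          spatial_ipd:    inter-channel phase differences w.r.t. the reference
                          channel, (channels - 1, frames, freq)
        """
        mag = np.abs(spec)
        full_band_spec = np.log(mag[ref_ch] + 1e-8)
        sub_band_spec = mag.transpose(2, 1, 0)
        phase = np.angle(spec)
        spatial_ipd = phase[np.arange(spec.shape[0]) != ref_ch] - phase[ref_ch]
        return full_band_spec, sub_band_spec, spatial_ipd
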
Investigating Training Objectives for Generative Speech Enhancement
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10753
Julius Richter, Danilo de Oliveira, Timo Gerkmann
Abstract: Generative speech enhancement has recently shown promising advancements in improving speech quality in noisy environments. Multiple diffusion-based frameworks exist, each employing distinct training objectives and learning techniques. This paper aims to explain the differences between these frameworks by focusing our investigation on score-based generative models and the Schrödinger bridge. We conduct a series of comprehensive experiments to compare their performance and highlight differing training behaviors. Furthermore, we propose a novel perceptual loss function tailored to the Schrödinger bridge framework, demonstrating enhanced performance and improved perceptual quality of the enhanced speech signals. All experimental code and pre-trained models are publicly available to facilitate further research and development in this area.
Citations: 0
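
The abstract does not give the form of the proposed perceptual loss, so the sketch below shows a generic stand-in that is common in speech enhancement: an L1 distance between log-mel spectrograms of the enhanced and clean signals. It illustrates where a perceptual term plugs into training, not the paper's specific objective.

    import torch
    import torchaudio

    class MelPerceptualLoss(torch.nn.Module):
        """Generic perceptual loss: L1 distance between log-mel spectrograms
        of the enhanced and clean waveforms (an illustrative stand-in, not
        the specific loss proposed in the paper)."""
        def __init__(self, sample_rate=16000, n_fft=512, n_mels=80):
            super().__init__()
            self.mel = torchaudio.transforms.MelSpectrogram(
                sample_rate=sample_rate, n_fft=n_fft, n_mels=n_mels)

        def forward(self, enhanced, clean):
            m_e = torch.log(self.mel(enhanced) + 1e-5)
            m_c = torch.log(self.mel(clean) + 1e-5)
            return torch.nn.functional.l1_loss(m_e, m_c)

    # Typical use: total loss = bridge/score-matching term + lambda * perceptual term.
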
RoboVox Far Field Speaker Recognition: A Novel Data Augmentation Approach with Pretrained Models
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10240
Muhammad Sudipto Siam Dip, Md Anik Hasan, Sapnil Sarker Bipro, Md Abdur Raiyan, Mohammod Abdul Motin
Abstract: In this study, we address the challenge of speaker recognition using a novel data augmentation technique of adding noise to enrollment files. This technique efficiently aligns the sources of test and enrollment files, enhancing comparability. Various pre-trained models were employed, with the ResNet model achieving the highest DCF of 0.84 and an EER of 13.44. The augmentation technique notably improved these results to a DCF of 0.75 and an EER of 12.79 for the ResNet model. Comparative analysis revealed the superiority of ResNet over models such as ECPA, Mel-spectrogram, Payonnet, and Titanet large. These results, along with the different augmentation schemes, contribute to the success of RoboVox far-field speaker recognition in this paper.
Citations: 0
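
The augmentation itself, adding noise to enrollment recordings at a controlled signal-to-noise ratio so that enrollment and far-field test conditions are better matched, is straightforward to sketch. The SNR value and noise source below are illustrative assumptions, not the paper's exact augmentation schemes.

    import numpy as np

    def add_noise_at_snr(speech, noise, snr_db):
        """Mix noise into an enrollment utterance at a target SNR (in dB)."""
        # Loop or trim the noise so it matches the speech length.
        if len(noise) < len(speech):
            noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
        noise = noise[:len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
        return speech + scale * noise

    # Example: augment a one-second enrollment file at 10 dB SNR with recorded noise.
    rng = np.random.default_rng(0)
    augmented = add_noise_at_snr(rng.standard_normal(16000),
                                 rng.standard_normal(8000), snr_db=10)
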
RF-GML: Reference-Free Generative Machine Listener
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10210
Arijit Biswas, Guanxin Jiang
Abstract: This paper introduces a novel reference-free (RF) audio quality metric called the RF-Generative Machine Listener (RF-GML), designed to evaluate coded mono, stereo, and binaural audio at a 48 kHz sample rate. RF-GML leverages transfer learning from a state-of-the-art full-reference (FR) Generative Machine Listener (GML) with minimal architectural modifications. The term "generative" refers to the model's ability to generate an arbitrary number of simulated listening scores. Unlike existing RF models, RF-GML accurately predicts subjective quality scores across diverse content types and codecs. Extensive evaluations demonstrate its superiority in rating unencoded audio and distinguishing different levels of coding artifacts. RF-GML's performance and versatility make it a valuable tool for coded audio quality assessment and monitoring in various applications, all without the need for a reference signal.
Citations: 0
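
One way to realize such FR-to-RF transfer, reusing the degraded-signal encoder and score head of a pretrained full-reference model while dropping the reference branch, is sketched below. All module names and shapes are placeholders; the actual GML/RF-GML architecture is not described in this listing.

    import torch
    import torch.nn as nn

    class ReferenceFreeQualityModel(nn.Module):
        """Toy illustration of FR-to-RF transfer: reuse the degraded-signal
        encoder from a pretrained full-reference model and drop the
        reference branch. All components are placeholders."""
        def __init__(self, fr_encoder: nn.Module, fr_emb_dim: int):
            super().__init__()
            self.encoder = fr_encoder          # weights initialised from the FR model
            self.head = nn.Sequential(         # new head: no reference embedding input
                nn.Linear(fr_emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

        def forward(self, degraded_feats):
            # Assumes the encoder returns (batch, time, fr_emb_dim) features.
            emb = self.encoder(degraded_feats).mean(dim=1)   # pool over time
            return self.head(emb)              # predicted listening score

    # Fine-tuning then proceeds on (degraded audio, subjective score) pairs only,
    # optionally freezing the transferred encoder for the first epochs.
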
Meta-Whisper: Speech-Based Meta-ICL for ASR on Low-Resource Languages
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10429
Ming-Hao Hsu, Kuan Po Huang, Hung-yi Lee
Abstract: This paper presents Meta-Whisper, a novel approach to improving automatic speech recognition (ASR) for low-resource languages using the Whisper model. By leveraging Meta In-Context Learning (Meta-ICL) and a k-Nearest Neighbors (KNN) algorithm for sample selection, Meta-Whisper enhances Whisper's ability to recognize speech in unfamiliar languages without extensive fine-tuning. Experiments on the ML-SUPERB dataset show that Meta-Whisper significantly reduces the Character Error Rate (CER) for low-resource languages compared to the original Whisper model. This method offers a promising solution for developing more adaptable multilingual ASR systems, particularly for languages with limited resources.
Citations: 0
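
The KNN sample-selection step, picking the k candidate utterances whose embeddings are closest to the query before prepending them as in-context examples, can be sketched as follows. The embedding source and k are assumptions; this is not the released Meta-Whisper code.

    import numpy as np

    def knn_select_examples(query_emb, pool_embs, k=4):
        """Pick the k pool utterances whose embeddings are closest to the query.

        query_emb: (dim,) embedding of the target utterance
        pool_embs: (n, dim) embeddings of candidate (audio, transcript) pairs
        Returns indices of the k nearest candidates by cosine similarity.
        """
        q = query_emb / (np.linalg.norm(query_emb) + 1e-9)
        p = pool_embs / (np.linalg.norm(pool_embs, axis=1, keepdims=True) + 1e-9)
        sims = p @ q
        return np.argsort(-sims)[:k]

    # The selected (audio, transcript) pairs are then prepended to the decoding
    # context before transcribing the low-resource query utterance.
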
An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
arXiv - EE - Audio and Speech Processing | Pub Date: 2024-09-16 | DOI: arxiv-2409.10515
Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister
Abstract: Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and the implicit user feedback present in multi-turn conversations. We accomplish this by leveraging advances in student-teacher learning and context-aware dialog processing, and by designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that our new framework, compared to traditional training, leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data.
Citations: 0
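
The abstract does not detail Ohm, so the sketch below shows a generic in-batch contrastive loss with hard-negative selection, i.e. keeping only the negatives most similar to each anchor, as a stand-in for how online hard-negative mining typically enters a contrastive self-supervision objective.

    import torch
    import torch.nn.functional as F

    def contrastive_loss_with_hard_negatives(anchors, positives, negatives,
                                             n_hard=8, temperature=0.1):
        """In-batch contrastive loss that keeps only the hardest negatives.

        anchors, positives: (batch, dim) paired embeddings
        negatives:          (batch, n_neg, dim) candidate negative embeddings
        For each anchor, the n_hard most similar negatives are retained
        (the 'hard' ones), then an InfoNCE-style loss is computed.
        """
        a = F.normalize(anchors, dim=-1)
        p = F.normalize(positives, dim=-1)
        n = F.normalize(negatives, dim=-1)

        pos_sim = (a * p).sum(-1, keepdim=True)                  # (batch, 1)
        neg_sim = torch.einsum("bd,bkd->bk", a, n)               # (batch, n_neg)
        hard_neg_sim, _ = neg_sim.topk(min(n_hard, neg_sim.size(1)), dim=1)

        logits = torch.cat([pos_sim, hard_neg_sim], dim=1) / temperature
        labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
        return F.cross_entropy(logits, labels)   # positive sits in column 0
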