Ideal-LLM: Integrating Dual Encoders and Language-Adapted LLM for Multilingual Speech-to-Text

Hongfei Xue, Wei Ren, Xuelong Geng, Kun Wei, Longhao Li, Qijie Shao, Linju Yang, Kai Diao, Lei Xie
{"title":"Ideal-LLM:整合双编码器和语言适应性 LLM,实现多语种语音到文本的转换","authors":"Hongfei Xue, Wei Ren, Xuelong Geng, Kun Wei, Longhao Li, Qijie Shao, Linju Yang, Kai Diao, Lei Xie","doi":"arxiv-2409.11214","DOIUrl":null,"url":null,"abstract":"Integrating audio encoders with LLMs through connectors has enabled these\nmodels to process and comprehend audio modalities, significantly enhancing\nspeech-to-text tasks, including automatic speech recognition (ASR) and\nautomatic speech translation (AST). However, these methods often overlook the\ncritical aspect of language adaptation in multilingual settings, relying\ninstead on multilingual data without adequately addressing language\ndifferences. To address this gap, we propose the Ideal-LLM model, which employs\ndual multilingual encoders to enrich language feature information and utilizes\na language-adapted connector to target the adaptation of each language\nspecifically. By leveraging the complementary strengths of Whisper and MMS\nencoders, our approach ensures richer multilingual representations.\nAdditionally, the language-adapted connector enhances modal transformation via\na language weight selector tailored for each language. Experimental results\ndemonstrate that Ideal-LLM significantly improves ASR performance, achieving a\n32.6% relative reduction in average word error rates compared to the standard\nspeech encoder integrated with LLMs and yields an average BLEU score of 36.78\nfor AST task.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ideal-LLM: Integrating Dual Encoders and Language-Adapted LLM for Multilingual Speech-to-Text\",\"authors\":\"Hongfei Xue, Wei Ren, Xuelong Geng, Kun Wei, Longhao Li, Qijie Shao, Linju Yang, Kai Diao, Lei Xie\",\"doi\":\"arxiv-2409.11214\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Integrating audio encoders with LLMs through connectors has enabled these\\nmodels to process and comprehend audio modalities, significantly enhancing\\nspeech-to-text tasks, including automatic speech recognition (ASR) and\\nautomatic speech translation (AST). However, these methods often overlook the\\ncritical aspect of language adaptation in multilingual settings, relying\\ninstead on multilingual data without adequately addressing language\\ndifferences. To address this gap, we propose the Ideal-LLM model, which employs\\ndual multilingual encoders to enrich language feature information and utilizes\\na language-adapted connector to target the adaptation of each language\\nspecifically. By leveraging the complementary strengths of Whisper and MMS\\nencoders, our approach ensures richer multilingual representations.\\nAdditionally, the language-adapted connector enhances modal transformation via\\na language weight selector tailored for each language. 
Experimental results\\ndemonstrate that Ideal-LLM significantly improves ASR performance, achieving a\\n32.6% relative reduction in average word error rates compared to the standard\\nspeech encoder integrated with LLMs and yields an average BLEU score of 36.78\\nfor AST task.\",\"PeriodicalId\":501284,\"journal\":{\"name\":\"arXiv - EE - Audio and Speech Processing\",\"volume\":\"26 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Audio and Speech Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11214\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11214","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Integrating audio encoders with LLMs through connectors has enabled these models to process and comprehend audio modalities, significantly enhancing speech-to-text tasks, including automatic speech recognition (ASR) and automatic speech translation (AST). However, these methods often overlook the critical aspect of language adaptation in multilingual settings, relying instead on multilingual data without adequately addressing language differences. To address this gap, we propose the Ideal-LLM model, which employs dual multilingual encoders to enrich language feature information and utilizes a language-adapted connector to target the adaptation of each language specifically. By leveraging the complementary strengths of Whisper and MMS encoders, our approach ensures richer multilingual representations. Additionally, the language-adapted connector enhances modal transformation via a language weight selector tailored for each language. Experimental results demonstrate that Ideal-LLM significantly improves ASR performance, achieving a 32.6% relative reduction in average word error rates compared to the standard speech encoder integrated with LLMs and yields an average BLEU score of 36.78 for AST task.
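The abstract describes the architecture only at a high level. As a rough illustration of how a dual-encoder connector with a per-language weight selector might be wired, here is a minimal PyTorch sketch; the class name, feature dimensions, and two-way softmax gating are assumptions made for this example and are not taken from the paper.

```python
# A minimal, illustrative sketch of dual-encoder fusion with a per-language
# weight selector, loosely following the idea described in the abstract
# (Whisper and MMS features combined by language-dependent weights, then
# projected into the LLM embedding space). All module names, dimensions,
# and the softmax gating scheme are assumptions, not the authors' code.
import torch
import torch.nn as nn


class LanguageAdaptedConnector(nn.Module):
    def __init__(self, whisper_dim=1280, mms_dim=1024, llm_dim=4096, num_languages=10):
        super().__init__()
        # Project each encoder's frame-level features into the LLM space.
        self.whisper_proj = nn.Linear(whisper_dim, llm_dim)
        self.mms_proj = nn.Linear(mms_dim, llm_dim)
        # Language weight selector: one pair of mixing logits per language ID.
        self.lang_weights = nn.Embedding(num_languages, 2)

    def forward(self, whisper_feats, mms_feats, lang_id):
        # whisper_feats: (batch, frames, whisper_dim)
        # mms_feats:     (batch, frames, mms_dim), assumed time-aligned here
        # lang_id:       (batch,) integer language indices
        w = torch.softmax(self.lang_weights(lang_id), dim=-1)  # (batch, 2)
        h_whisper = self.whisper_proj(whisper_feats)
        h_mms = self.mms_proj(mms_feats)
        # Per-language weighted sum of the two encoder streams.
        fused = w[:, 0, None, None] * h_whisper + w[:, 1, None, None] * h_mms
        return fused  # would be fed to the LLM as audio embeddings


# Toy usage with random tensors standing in for real encoder outputs.
connector = LanguageAdaptedConnector()
whisper_feats = torch.randn(2, 50, 1280)
mms_feats = torch.randn(2, 50, 1024)
lang_id = torch.tensor([0, 3])
print(connector(whisper_feats, mms_feats, lang_id).shape)  # torch.Size([2, 50, 4096])
```

The per-language gating mirrors the abstract's claim that the two encoders have complementary strengths: a language that one encoder represents better can lean on that stream more heavily.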