Faster Speech-LLaMA Inference with Multi-token Prediction

Desh Raj, Gil Keren, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli
arXiv:2409.08148 · arXiv - EE - Audio and Speech Processing · published 2024-09-12
https://doi.org/arxiv-2409.08148
Citations: 0

Abstract

Large language models (LLMs) have become proficient at solving a wide variety of tasks, including those involving multi-modal inputs. In particular, instantiating an LLM (such as LLaMA) with a speech encoder and training it on paired data imparts speech recognition (ASR) abilities to the decoder-only model, hence called Speech-LLaMA. Nevertheless, due to the sequential nature of auto-regressive inference and the relatively large decoder, Speech-LLaMA models require relatively high inference time. In this work, we propose to speed up Speech-LLaMA inference by predicting multiple tokens in the same decoding step. We explore several model architectures that enable this, and investigate their performance using threshold-based and verification-based inference strategies. We also propose a prefix-based beam search decoding method that allows efficient minimum word error rate (MWER) training for such models. We evaluate our models on a variety of public benchmarks, where they reduce the number of decoder calls by ~3.2x while maintaining or improving WER performance.
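The threshold-based multi-token strategy described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: `step_fn`, the head interface (a list of `(token, probability)` pairs), and the toy target sequence are all assumptions made for the example. The idea it demonstrates is the one the abstract states: when the extra prediction heads are confident, several tokens are accepted from a single decoder call, reducing the total number of calls.

```python
def multi_token_decode(step_fn, prompt, k=4, tau=0.9, max_len=20, eos=0):
    """Greedy decoding with k prediction heads and threshold-based acceptance.

    step_fn(seq) returns at least k (token, probability) pairs: the model's
    predictions for the next k positions given the current sequence. At each
    decoder call we accept head predictions left-to-right while their
    probability is >= tau (the first head, the ordinary next-token prediction,
    is always accepted), so confident spans cost one call instead of k.
    Returns (decoded sequence, number of decoder calls).
    """
    seq = list(prompt)
    calls = 0
    while len(seq) - len(prompt) < max_len:
        heads = step_fn(seq)[:k]
        calls += 1
        accepted = [heads[0][0]]          # always take the first prediction
        for tok, p in heads[1:]:
            if p < tau:                   # first unconfident head: stop accepting
                break
            accepted.append(tok)
        for tok in accepted:
            seq.append(tok)
            if tok == eos:                # end-of-sequence token terminates decoding
                return seq, calls
    return seq, calls


# Toy model: always predicts the next k tokens of a fixed target with
# probability 0.95, so every head clears the threshold.
prompt = [1, 2]
target = [5, 6, 7, 8, 9, 10, 11, 0]      # ends in eos = 0

def toy_step(seq):
    pos = len(seq) - len(prompt)
    return [(target[pos + i] if pos + i < len(target) else 0, 0.95)
            for i in range(4)]
```

With this toy model, decoding the 8-token target takes 2 decoder calls instead of the 8 a purely auto-regressive loop would need; with a lower per-head confidence (or a higher `tau`), acceptance falls back toward one token per call.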