Desh Raj, Gil Keren, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli
arXiv - EE - Audio and Speech Processing, published 2024-09-12. https://doi.org/arxiv-2409.08148
Faster Speech-LLaMA Inference with Multi-token Prediction
Large language models (LLMs) have become proficient at solving a wide variety
of tasks, including those involving multi-modal inputs. In particular,
instantiating an LLM (such as LLaMA) with a speech encoder and training it on
paired data imparts speech recognition (ASR) abilities to the decoder-only
model, hence called Speech-LLaMA. However, due to the sequential nature of
auto-regressive inference and the relatively large decoder, Speech-LLaMA models
incur high inference latency. In this work, we propose to speed up
Speech-LLaMA inference by predicting multiple tokens in the same decoding step.
We explore several model architectures that enable this, and investigate their
performance using threshold-based and verification-based inference strategies.
We also propose a prefix-based beam search decoding method that allows
efficient minimum word error rate (MWER) training for such models. We evaluate
our models on a variety of public benchmarks, where they reduce the number of
decoder calls by ~3.2x while maintaining or improving WER performance.