Multi-modal Speech Transformer Decoders: When Do Multiple Modalities Improve Accuracy?

Yiwen Guan, Viet Anh Trinh, Vivek Voleti, Jacob Whitehill
{"title":"多模态语音变换解码器:多种模式何时能提高准确性?","authors":"Yiwen Guan, Viet Anh Trinh, Vivek Voleti, Jacob Whitehill","doi":"arxiv-2409.09221","DOIUrl":null,"url":null,"abstract":"Decoder-only discrete-token language models have recently achieved\nsignificant success in automatic speech recognition. However, systematic\nanalyses of how different modalities impact performance in specific scenarios\nremain limited. In this paper, we investigate the effects of multiple\nmodalities on recognition accuracy on both synthetic and real-world datasets.\nOur experiments suggest that: (1) Integrating more modalities can increase\naccuracy; in particular, our paper is, to our best knowledge, the first to show\nthe benefit of combining audio, image context, and lip information; (2) Images\nas a supplementary modality for speech recognition provide the greatest benefit\nat moderate noise levels, moreover, they exhibit a different trend compared to\ninherently synchronized modalities like lip movements; (3) Performance improves\non both synthetic and real-world datasets when the most relevant visual\ninformation is filtered as a preprocessing step.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"17 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-modal Speech Transformer Decoders: When Do Multiple Modalities Improve Accuracy?\",\"authors\":\"Yiwen Guan, Viet Anh Trinh, Vivek Voleti, Jacob Whitehill\",\"doi\":\"arxiv-2409.09221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decoder-only discrete-token language models have recently achieved\\nsignificant success in automatic speech recognition. However, systematic\\nanalyses of how different modalities impact performance in specific scenarios\\nremain limited. 
In this paper, we investigate the effects of multiple\\nmodalities on recognition accuracy on both synthetic and real-world datasets.\\nOur experiments suggest that: (1) Integrating more modalities can increase\\naccuracy; in particular, our paper is, to our best knowledge, the first to show\\nthe benefit of combining audio, image context, and lip information; (2) Images\\nas a supplementary modality for speech recognition provide the greatest benefit\\nat moderate noise levels, moreover, they exhibit a different trend compared to\\ninherently synchronized modalities like lip movements; (3) Performance improves\\non both synthetic and real-world datasets when the most relevant visual\\ninformation is filtered as a preprocessing step.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"17 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-modal Speech Transformer Decoders: When Do Multiple Modalities Improve Accuracy?
Decoder-only discrete-token language models have recently achieved significant success in automatic speech recognition. However, systematic analyses of how different modalities impact performance in specific scenarios remain limited. In this paper, we investigate the effects of multiple modalities on recognition accuracy on both synthetic and real-world datasets. Our experiments suggest that: (1) integrating more modalities can increase accuracy; in particular, our paper is, to the best of our knowledge, the first to show the benefit of combining audio, image context, and lip information; (2) images as a supplementary modality for speech recognition provide the greatest benefit at moderate noise levels and, moreover, exhibit a different trend from inherently synchronized modalities such as lip movements; (3) performance improves on both synthetic and real-world datasets when a preprocessing step filters the visual input down to its most relevant information.
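The abstract gives no implementation details, but as a rough illustration of the decoder-only, discrete-token, multi-modal setup it describes, here is a minimal PyTorch sketch. Everything in it, the module names, dimensions, and the prefix-concatenation fusion of audio, image-context, and lip inputs, is an assumption for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiModalSpeechDecoder(nn.Module):
    """Decoder-only model over a concatenated multi-modal token sequence.

    All names, sizes, and the prefix-concatenation fusion scheme are
    illustrative assumptions, not the paper's actual architecture.
    """

    def __init__(self, vocab_size=1000, audio_vocab=512, d_model=256,
                 n_heads=4, n_layers=4, img_dim=512, lip_dim=256, max_len=2048):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, d_model)
        self.audio_emb = nn.Embedding(audio_vocab, d_model)  # discrete audio tokens
        self.img_proj = nn.Linear(img_dim, d_model)          # image-context features
        self.lip_proj = nn.Linear(lip_dim, d_model)          # lip-movement features
        self.pos_emb = nn.Embedding(max_len, d_model)        # learned positions
        # A Transformer encoder stack under a causal mask behaves as a
        # decoder-only model (self-attention only, no cross-attention).
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, audio_tokens, img_feats, lip_feats, text_tokens):
        # Fuse modalities by concatenating them as a prefix before the text.
        prefix = torch.cat([self.audio_emb(audio_tokens),
                            self.img_proj(img_feats),
                            self.lip_proj(lip_feats)], dim=1)
        x = torch.cat([prefix, self.text_emb(text_tokens)], dim=1)
        x = x + self.pos_emb(torch.arange(x.size(1), device=x.device))
        # Causal mask: each position attends only to itself and the past.
        causal = torch.triu(torch.ones(x.size(1), x.size(1),
                                       dtype=torch.bool, device=x.device), 1)
        h = self.blocks(x, mask=causal)
        # Next-token logits for the text positions only.
        return self.lm_head(h[:, prefix.size(1):])


# Toy usage: batch of 2, with 50 audio tokens, 4 image-context vectors,
# 50 lip frames, and 20 text tokens decoded so far.
model = MultiModalSpeechDecoder()
logits = model(torch.randint(0, 512, (2, 50)),
               torch.randn(2, 4, 512),
               torch.randn(2, 50, 256),
               torch.randint(0, 1000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 1000])
```

The prefix layout means text predictions can attend to every modality token, which is one common way decoder-only models consume non-text inputs; whether the paper uses this exact scheme is not stated in the abstract.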
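Finding (3) mentions filtering the visual input to its most relevant information as a preprocessing step. The abstract does not say how; one hypothetical realization, sketched below, scores candidate image features against an audio summary vector and keeps the top-k.

```python
import torch
import torch.nn.functional as F

def filter_visual(img_feats, audio_summary, k=4):
    """Keep the k image features most similar to an audio summary vector.

    img_feats: (N, D) candidate visual features; audio_summary: (D,).
    Purely illustrative: the paper's actual filtering method is not
    described in the abstract.
    """
    scores = F.cosine_similarity(img_feats, audio_summary.unsqueeze(0), dim=-1)
    topk = scores.topk(min(k, img_feats.size(0))).indices
    return img_feats[topk]
```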