JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition

IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Chang Sun;Bo Qin;Hong Yang
DOI: 10.1109/OJSP.2024.3496819
Journal: IEEE Open Journal of Signal Processing, vol. 5, pp. 1147-1152
Published: 2024-11-11 (Journal Article)
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10750407
Citations: 0

Abstract

Visual Speech Recognition (VSR) tasks are generally recognized to have a lower theoretical performance ceiling than Automatic Speech Recognition (ASR), owing to the inherent limitations of conveying semantic information visually. To mitigate this challenge, this paper introduces an advanced knowledge distillation approach based on a Joint-Embedding Predictive Architecture (JEPA), termed JEP-KD, designed to utilize audio features more effectively during model training. Central to JEP-KD is the inclusion of a generative network within the embedding layer of the knowledge distillation structure, which enhances the video encoder's capacity for semantic feature extraction and brings its output closer to the audio features produced by a pre-trained ASR model's encoder. This approach aims to progressively reduce the performance gap between VSR and ASR. Moreover, a comprehensive multimodal, multistage training regimen for the JEP-KD framework is established, bolstering the robustness and efficacy of the training process. Experimental results demonstrate that JEP-KD significantly improves the performance of VSR models and shows versatility across different VSR platforms, indicating its potential for broader application in other multimodal tasks.
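The core idea described in the abstract — a generative predictor that maps video-encoder embeddings toward the frozen embeddings of a pre-trained ASR audio encoder — can be sketched as a simple alignment problem. The sketch below is a toy illustration under stated assumptions: the linear predictor, the MSE alignment loss, all dimensions, and all variable names are assumptions for illustration, not the paper's actual architecture or training objective.

```python
import numpy as np

# Toy sketch of the JEP-KD alignment idea: a learnable "generative" predictor
# maps video-encoder embeddings toward the (frozen) audio-encoder embeddings
# of a pre-trained ASR model, trained here with a plain MSE loss.
# All shapes and the linear predictor are illustrative assumptions.

rng = np.random.default_rng(0)

D_VID, D_AUD, N = 32, 24, 256  # video/audio embedding dims, batch size

# Stand-ins for frozen encoder outputs (in the paper these would come from
# the video encoder and the pre-trained ASR audio encoder, respectively).
video_emb = rng.normal(size=(N, D_VID))
true_map = rng.normal(size=(D_VID, D_AUD)) / np.sqrt(D_VID)
audio_emb = video_emb @ true_map + 0.05 * rng.normal(size=(N, D_AUD))

# Learnable linear predictor standing in for the generative network
# inside the embedding layer.
W = rng.normal(size=(D_VID, D_AUD)) * 0.01

def mse(pred, target):
    """Mean squared alignment error between predicted and audio embeddings."""
    return float(np.mean((pred - target) ** 2))

lr = 0.1
losses = []
for step in range(200):
    pred = video_emb @ W                          # predicted audio-like embedding
    grad = 2.0 * video_emb.T @ (pred - audio_emb) / N  # gradient of the MSE loss
    W -= lr * grad                                # gradient-descent update
    losses.append(mse(video_emb @ W, audio_emb))

print(f"alignment loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

As the predictor trains, the video-derived embeddings move toward the audio embeddings, which is the mechanism the paper relies on to narrow the VSR-ASR gap; the real system would use deep encoders and a full multistage regimen rather than a single linear map.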
Source journal: IEEE Open Journal of Signal Processing — CiteScore 5.30, self-citation rate 0.00%, average review time 22 weeks.