{"title":"针对分离视觉语音表征的音频引导自监督学习","authors":"Dalu Feng, Shuang Yang, Shiguang Shan, Xilin Chen","doi":"10.1007/s11704-024-3787-8","DOIUrl":null,"url":null,"abstract":"<p>In this paper, we propose a novel two-branch framework to learn the disentangled visual speech representations based on two particular observations. Its main idea is to introduce the audio signal to guide the learning of speech-relevant cues and introduce a bottleneck to restrict the speech-irrelevant branch from learning high-frequency and fine-grained speech cues. Experiments on both the word-level and sentence-level audio-visual speech datasets LRW and LRS2-BBC show the effectiveness. Our future work is to explore more explicit auxiliary tasks and constraints beyond the reconstruction task of the speech-relevant and irrelevant branch to improve further its ability of capturing speech cues in the video. Meanwhile, it’s also a nice try to combine multiple types of knowledge representations [10] to further boost the obtained speech epresentations, which is also left for the future work.</p>","PeriodicalId":12640,"journal":{"name":"Frontiers of Computer Science","volume":"75 1","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Audio-guided self-supervised learning for disentangled visual speech representations\",\"authors\":\"Dalu Feng, Shuang Yang, Shiguang Shan, Xilin Chen\",\"doi\":\"10.1007/s11704-024-3787-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In this paper, we propose a novel two-branch framework to learn the disentangled visual speech representations based on two particular observations. Its main idea is to introduce the audio signal to guide the learning of speech-relevant cues and introduce a bottleneck to restrict the speech-irrelevant branch from learning high-frequency and fine-grained speech cues. Experiments on both the word-level and sentence-level audio-visual speech datasets LRW and LRS2-BBC show the effectiveness. Our future work is to explore more explicit auxiliary tasks and constraints beyond the reconstruction task of the speech-relevant and irrelevant branch to improve further its ability of capturing speech cues in the video. 
Meanwhile, it’s also a nice try to combine multiple types of knowledge representations [10] to further boost the obtained speech epresentations, which is also left for the future work.</p>\",\"PeriodicalId\":12640,\"journal\":{\"name\":\"Frontiers of Computer Science\",\"volume\":\"75 1\",\"pages\":\"\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2024-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers of Computer Science\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11704-024-3787-8\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers of Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11704-024-3787-8","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Audio-guided self-supervised learning for disentangled visual speech representations
In this paper, we propose a novel two-branch framework for learning disentangled visual speech representations, motivated by two particular observations. Its main idea is to introduce the audio signal to guide the learning of speech-relevant cues, and to introduce a bottleneck that restricts the speech-irrelevant branch from learning high-frequency, fine-grained speech cues. Experiments on both the word-level and sentence-level audio-visual speech datasets, LRW and LRS2-BBC, demonstrate the framework's effectiveness. Our future work is to explore more explicit auxiliary tasks and constraints, beyond the reconstruction task shared by the speech-relevant and speech-irrelevant branches, to further improve the framework's ability to capture speech cues in video. Combining multiple types of knowledge representations [10] to further strengthen the learned speech representations is another promising direction, which we also leave for future work.
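To make the idea concrete, the following is a minimal PyTorch sketch of how such an audio-guided two-branch setup could be wired: one encoder whose output is pulled toward audio features, a second encoder squeezed through a narrow bottleneck, and a shared reconstruction objective. All class names, layer sizes, and the plain MSE losses here are illustrative assumptions; the abstract does not specify the paper's actual architecture or loss weighting.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_encoder(feat_dim):
        # Small convolutional encoder for 3 x 32 x 32 lip-region frames (toy size).
        return nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    class TwoBranchDisentangler(nn.Module):  # hypothetical name
        def __init__(self, feat_dim=512, audio_dim=512, bottleneck_dim=16):
            super().__init__()
            # Speech-relevant branch: its features are guided by the audio signal.
            self.speech_branch = make_encoder(feat_dim)
            # Speech-irrelevant branch: a narrow bottleneck keeps it from encoding
            # high-frequency, fine-grained speech cues such as lip-motion details.
            self.other_branch = nn.Sequential(
                make_encoder(feat_dim),
                nn.Linear(feat_dim, bottleneck_dim),
            )
            self.audio_proj = nn.Linear(audio_dim, feat_dim)
            # Decoder reconstructs the frame from both embeddings jointly.
            self.decoder = nn.Sequential(
                nn.Linear(feat_dim + bottleneck_dim, 3 * 32 * 32),
                nn.Unflatten(1, (3, 32, 32)),
            )

        def forward(self, frame, audio_feat):
            z_speech = self.speech_branch(frame)
            z_other = self.other_branch(frame)
            recon = self.decoder(torch.cat([z_speech, z_other], dim=1))
            # Audio guidance: align visual speech features with projected audio.
            guide_loss = F.mse_loss(z_speech, self.audio_proj(audio_feat))
            recon_loss = F.mse_loss(recon, frame)
            return recon_loss + guide_loss

    model = TwoBranchDisentangler()
    frames = torch.randn(4, 3, 32, 32)  # toy lip-region frames
    audio = torch.randn(4, 512)         # toy features from any audio encoder
    loss = model(frames, audio)
    loss.backward()

In a sketch like this, the bottleneck width is the key lever: with too little capacity to represent fine-grained articulatory motion, the speech-irrelevant branch tends toward slowly varying appearance content, so the reconstruction objective pushes the speech cues into the audio-guided branch.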
Journal Introduction:
Frontiers of Computer Science aims to provide a forum for the publication of peer-reviewed papers and to promote rapid communication and exchange between computer scientists. The journal publishes research papers and review articles on a wide range of topics, including architecture, software, artificial intelligence, theoretical computer science, networks and communication, information systems, multimedia and graphics, information security, and interdisciplinary work. The journal especially encourages papers from newly emerging and multidisciplinary areas, as well as papers reflecting international trends in research and development and papers on special topics reporting progress made by Chinese computer scientists.