{"title":"Keypoints and Action Units Jointly Drive Talking Head Generation for Video Conferencing","authors":"Wuzhen Shi;Zibang Xue;Yang Wen","doi":"10.1109/TCSVT.2025.3560369","DOIUrl":null,"url":null,"abstract":"This paper introduces a high-quality talking head generation method that is jointly driven by keypoints and action units, aiming to strike a balance between low-bandwidth transmission and high-quality generation in video conference scenarios. Existing methods for talking head generation often face limitations: they either require an excessive amount of driving information or struggle with accuracy and quality when adapted to low-bandwidth conditions. To address this, we decompose the talking head generation task into two components: a driving task, focused on information-limited control, and an enhancement task, aimed at achieving high-quality, high-definition output. Our proposed method innovatively incorporates the joint driving of keypoints and action units, improving the accuracy of pose and expression generation while remaining suitable for low-bandwidth environments. Furthermore, we implement a multi-step video quality enhancement process, targeting both the entire frame and key regions, while incorporating temporal consistency constraints. By leveraging attention mechanisms, we enhance the realism of the challenging-to-generate mouth regions and mitigate background jitter through background fusion. Finally, a prior-driven super-resolution network is employed to achieve high-quality display. Extensive experiments demonstrate that our method effectively supports low-resolution recording, low-bandwidth transmission, and high-definition display.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 9","pages":"8692-8706"},"PeriodicalIF":11.1000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10964316/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
This paper introduces a high-quality talking head generation method jointly driven by keypoints and action units, aiming to balance low-bandwidth transmission against high-quality generation in video conferencing scenarios. Existing talking head generation methods often face a trade-off: they either require an excessive amount of driving information or sacrifice accuracy and quality when adapted to low-bandwidth conditions. To address this, we decompose talking head generation into two components: a driving task, which focuses on control from limited information, and an enhancement task, which aims at high-quality, high-definition output. Our method innovatively combines keypoint and action-unit driving, improving the accuracy of pose and expression generation while remaining suitable for low-bandwidth environments. Furthermore, we implement a multi-step video quality enhancement process that targets both the entire frame and key regions and incorporates temporal consistency constraints. Using attention mechanisms, we enhance the realism of the mouth region, which is particularly difficult to generate, and mitigate background jitter through background fusion. Finally, a prior-driven super-resolution network produces the high-quality display. Extensive experiments demonstrate that our method effectively supports low-resolution recording, low-bandwidth transmission, and high-definition display.
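To make the bandwidth argument concrete, below is a minimal, hypothetical Python sketch of the sender-side driving signal the abstract implies: rather than transmitting full frames, the sender sends a compact packet of facial keypoints (pose) and action-unit intensities (expression), which the receiver uses to animate a previously shared reference frame. All names, counts, and the 8-bit quantization scheme here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the low-bandwidth driving-signal idea.
# Counts, quantization, and the extractor stub are assumptions for
# illustration only, not the paper's method.
from dataclasses import dataclass
import numpy as np

NUM_KEYPOINTS = 15   # assumed number of 2D facial keypoints
NUM_AUS = 17         # assumed number of action units (e.g., FACS AUs)

@dataclass
class DrivingPacket:
    """Per-frame driving signal: keypoints for pose, AUs for expression."""
    keypoints: np.ndarray  # shape (NUM_KEYPOINTS, 2), normalized to [0, 1]
    aus: np.ndarray        # shape (NUM_AUS,), intensities in [0, 1]

    def quantize(self) -> bytes:
        """Quantize to 8 bits per value for transmission."""
        kp_q = np.clip(self.keypoints * 255, 0, 255).astype(np.uint8)
        au_q = np.clip(self.aus * 255, 0, 255).astype(np.uint8)
        return kp_q.tobytes() + au_q.tobytes()

def packet_from_frame(frame: np.ndarray) -> DrivingPacket:
    """Stand-in for the extractors: a real system would run a keypoint
    detector and an AU estimator on the frame; here we use placeholders."""
    rng = np.random.default_rng(0)
    return DrivingPacket(
        keypoints=rng.random((NUM_KEYPOINTS, 2)),
        aus=rng.random(NUM_AUS),
    )

if __name__ == "__main__":
    frame = np.zeros((256, 256, 3), dtype=np.uint8)  # one raw video frame
    payload = packet_from_frame(frame).quantize()
    print(f"raw frame:      {frame.nbytes} bytes")   # 196608 bytes
    print(f"driving packet: {len(payload)} bytes")   # 47 bytes
    # The receiver would use the packet to warp/animate a shared reference
    # frame, then apply enhancement and super-resolution for display.
```

Even this toy packet is three to four orders of magnitude smaller than the raw frame, which is the gap the driving/enhancement decomposition exploits: only the driving signal crosses the network, while the quality-critical work happens at the receiver.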
About the Journal
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.