Keliang Chen, Zongze Li, Fang Cui, Mao Ni, Shaoying Wang, Junlin Che, Feng Liu, Yonggang Qi, Fangwei Zhang, Jun Liu, Gan Guo, Rongrong Fu, Yunxia Huang
{"title":"FastTalker:实时音频驱动的三维高斯说话脸生成","authors":"Keliang Chen , Zongze Li , Fang Cui , Mao Ni , Shaoying Wang , Junlin Che , Feng Liu , Yonggang Qi , Fangwei Zhang , Jun Liu , Gan Guo , Rongrong Fu , Yunxia Huang","doi":"10.1016/j.imavis.2025.105573","DOIUrl":null,"url":null,"abstract":"<div><div>The performance of 3D talking head generation has shown significant im- provement over the past few years. Nevertheless, real-time rendering remains a challenge that needs to be overcome. To address this issue, we present the FastTalker framework, which uses 3D Gaussian Splatting (3DGS) for talking head generation. This method introduces an audio-driven Dynamic Neural Skinning (DNS) approach to facilitate flexible and high-fidelity talking head video generation. It first employs an adaptive FLAME mesh for sampling to obtain the initialized 3DGS. Then, Neural Skinning Networks (DNS) are used to account for the appearance changes of 3DGS. Finally, a pre-trained Audio Motion Net is utilized to model facial movements as the final dynamic driving facial signal. Experimental results demonstrate that FastTalker of- fers a rendering speed exceeding 100 FPS, making it the fastest audio-driven talking head generation method in terms of inference efficiency.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"159 ","pages":"Article 105573"},"PeriodicalIF":4.2000,"publicationDate":"2025-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FastTalker: Real-time audio-driven talking face generation with 3D Gaussian\",\"authors\":\"Keliang Chen , Zongze Li , Fang Cui , Mao Ni , Shaoying Wang , Junlin Che , Feng Liu , Yonggang Qi , Fangwei Zhang , Jun Liu , Gan Guo , Rongrong Fu , Yunxia Huang\",\"doi\":\"10.1016/j.imavis.2025.105573\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The performance of 3D talking head generation has shown significant im- provement over the past few years. Nevertheless, real-time rendering remains a challenge that needs to be overcome. To address this issue, we present the FastTalker framework, which uses 3D Gaussian Splatting (3DGS) for talking head generation. This method introduces an audio-driven Dynamic Neural Skinning (DNS) approach to facilitate flexible and high-fidelity talking head video generation. It first employs an adaptive FLAME mesh for sampling to obtain the initialized 3DGS. Then, Neural Skinning Networks (DNS) are used to account for the appearance changes of 3DGS. Finally, a pre-trained Audio Motion Net is utilized to model facial movements as the final dynamic driving facial signal. 
Experimental results demonstrate that FastTalker of- fers a rendering speed exceeding 100 FPS, making it the fastest audio-driven talking head generation method in terms of inference efficiency.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"159 \",\"pages\":\"Article 105573\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-05-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625001611\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625001611","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FastTalker: Real-time audio-driven talking face generation with 3D Gaussian
The performance of 3D talking head generation has improved significantly over the past few years. Nevertheless, real-time rendering remains a challenge to be overcome. To address this issue, we present the FastTalker framework, which uses 3D Gaussian Splatting (3DGS) for talking head generation. The method introduces an audio-driven Dynamic Neural Skinning (DNS) approach to enable flexible, high-fidelity talking head video generation. It first samples an adaptive FLAME mesh to obtain the initialized 3DGS. The DNS network then accounts for the appearance changes of the 3DGS. Finally, a pre-trained Audio Motion Net models facial movements as the final dynamic driving facial signal. Experimental results demonstrate that FastTalker achieves a rendering speed exceeding 100 FPS, making it the fastest audio-driven talking head generation method in terms of inference efficiency.
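The abstract describes a three-stage pipeline: sample a FLAME mesh to initialize the Gaussians, predict per-Gaussian appearance changes with the DNS network, and drive it all with a motion code from a pre-trained Audio Motion Net. The following is a minimal Python sketch of that data flow only; all module names, layer choices, and tensor shapes are illustrative assumptions, not the authors' implementation, and the 3DGS rasterization step is left as a placeholder.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the FastTalker data flow described in the abstract.
# Shapes and architectures are assumptions for illustration only.

class DynamicNeuralSkinning(nn.Module):
    """Predicts per-Gaussian offsets (the abstract's DNS) from a motion code."""
    def __init__(self, feat_dim=64, gauss_params=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, gauss_params),  # e.g. position/rotation/scale/opacity deltas
        )

    def forward(self, gauss_xyz, motion_feat):
        # Broadcast the per-frame motion code to every Gaussian center.
        feat = motion_feat.expand(gauss_xyz.shape[0], -1)
        return self.mlp(torch.cat([gauss_xyz, feat], dim=-1))

class AudioMotionNet(nn.Module):
    """Maps a window of audio features to a per-frame facial motion code."""
    def __init__(self, audio_dim=29, feat_dim=64):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, feat_dim, batch_first=True)

    def forward(self, audio_feats):
        _, h = self.encoder(audio_feats)   # final hidden state: (1, 1, feat_dim)
        return h.squeeze(0)                # (1, feat_dim)

def drive_frame(gauss_xyz, dns, motion_net, audio_window):
    """One frame: audio -> motion code -> Gaussian offsets -> (rasterize)."""
    motion = motion_net(audio_window)      # dynamic driving facial signal
    offsets = dns(gauss_xyz, motion)       # deform the initialized 3DGS
    # A real system would hand (gauss_xyz, offsets) to a 3DGS rasterizer here.
    return offsets

# Dummy usage: 5,000 Gaussians sampled from a FLAME mesh and a short
# window of 29-D audio features (DeepSpeech-style dimensionality assumed).
gauss_xyz = torch.rand(5000, 3)
dns = DynamicNeuralSkinning()
motion_net = AudioMotionNet()
audio = torch.rand(1, 16, 29)
print(drive_frame(gauss_xyz, dns, motion_net, audio).shape)  # torch.Size([5000, 10])
```

The reported >100 FPS figure is plausible under this structure because, per frame, only a small MLP and the rasterizer run over the fixed set of Gaussians; the expensive mesh sampling happens once at initialization.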
Journal overview:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.