Kartik Teotia, Hyeongwoo Kim, Pablo Garrido, Marc Habermann, Mohamed Elgharib, Christian Theobalt
{"title":"高斯头像:从粗到细的表征中端到端学习可驾驶的高斯头像","authors":"Kartik Teotia, Hyeongwoo Kim, Pablo Garrido, Marc Habermann, Mohamed Elgharib, Christian Theobalt","doi":"arxiv-2409.11951","DOIUrl":null,"url":null,"abstract":"Real-time rendering of human head avatars is a cornerstone of many computer\ngraphics applications, such as augmented reality, video games, and films, to\nname a few. Recent approaches address this challenge with computationally\nefficient geometry primitives in a carefully calibrated multi-view setup.\nAlbeit producing photorealistic head renderings, it often fails to represent\ncomplex motion changes such as the mouth interior and strongly varying head\nposes. We propose a new method to generate highly dynamic and deformable human\nhead avatars from multi-view imagery in real-time. At the core of our method is\na hierarchical representation of head models that allows to capture the complex\ndynamics of facial expressions and head movements. First, with rich facial\nfeatures extracted from raw input frames, we learn to deform the coarse facial\ngeometry of the template mesh. We then initialize 3D Gaussians on the deformed\nsurface and refine their positions in a fine step. We train this coarse-to-fine\nfacial avatar model along with the head pose as a learnable parameter in an\nend-to-end framework. This enables not only controllable facial animation via\nvideo inputs, but also high-fidelity novel view synthesis of challenging facial\nexpressions, such as tongue deformations and fine-grained teeth structure under\nlarge motion changes. Moreover, it encourages the learned head avatar to\ngeneralize towards new facial expressions and head poses at inference time. We\ndemonstrate the performance of our method with comparisons against the related\nmethods on different datasets, spanning challenging facial expression sequences\nacross multiple identities. We also show the potential application of our\napproach by demonstrating a cross-identity facial performance transfer\napplication.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"64 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations\",\"authors\":\"Kartik Teotia, Hyeongwoo Kim, Pablo Garrido, Marc Habermann, Mohamed Elgharib, Christian Theobalt\",\"doi\":\"arxiv-2409.11951\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Real-time rendering of human head avatars is a cornerstone of many computer\\ngraphics applications, such as augmented reality, video games, and films, to\\nname a few. Recent approaches address this challenge with computationally\\nefficient geometry primitives in a carefully calibrated multi-view setup.\\nAlbeit producing photorealistic head renderings, it often fails to represent\\ncomplex motion changes such as the mouth interior and strongly varying head\\nposes. We propose a new method to generate highly dynamic and deformable human\\nhead avatars from multi-view imagery in real-time. At the core of our method is\\na hierarchical representation of head models that allows to capture the complex\\ndynamics of facial expressions and head movements. First, with rich facial\\nfeatures extracted from raw input frames, we learn to deform the coarse facial\\ngeometry of the template mesh. 
We then initialize 3D Gaussians on the deformed\\nsurface and refine their positions in a fine step. We train this coarse-to-fine\\nfacial avatar model along with the head pose as a learnable parameter in an\\nend-to-end framework. This enables not only controllable facial animation via\\nvideo inputs, but also high-fidelity novel view synthesis of challenging facial\\nexpressions, such as tongue deformations and fine-grained teeth structure under\\nlarge motion changes. Moreover, it encourages the learned head avatar to\\ngeneralize towards new facial expressions and head poses at inference time. We\\ndemonstrate the performance of our method with comparisons against the related\\nmethods on different datasets, spanning challenging facial expression sequences\\nacross multiple identities. We also show the potential application of our\\napproach by demonstrating a cross-identity facial performance transfer\\napplication.\",\"PeriodicalId\":501174,\"journal\":{\"name\":\"arXiv - CS - Graphics\",\"volume\":\"64 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11951\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11951","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations
Real-time rendering of human head avatars is a cornerstone of many computer graphics applications, such as augmented reality, video games, and films. Recent approaches address this challenge with computationally efficient geometry primitives in a carefully calibrated multi-view setup. Although these methods produce photorealistic head renderings, they often fail to represent complex motion, such as the mouth interior and strongly varying head poses. We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time. At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements. First, using rich facial features extracted from the raw input frames, we learn to deform the coarse facial geometry of the template mesh.
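As an illustration of what such a coarse step could look like, here is a minimal PyTorch-style sketch: an image encoder maps input frames to a feature code, and an MLP decodes per-vertex offsets that deform a fixed template head mesh. All module names, layer sizes, and the encoder architecture are hypothetical stand-ins, not the paper's actual networks.

```python
import torch
import torch.nn as nn

class CoarseDeformer(nn.Module):
    """Predicts per-vertex offsets that deform a template head mesh."""
    def __init__(self, num_vertices: int, feat_dim: int = 256):
        super().__init__()
        # Small CNN standing in for the paper's facial-feature extractor.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # MLP decoding the feature code into a per-vertex 3D offset field.
        self.offset_head = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, frames: torch.Tensor, template_verts: torch.Tensor):
        # frames: (B, 3, H, W); template_verts: (V, 3)
        code = self.encoder(frames)
        offsets = self.offset_head(code).view(-1, template_verts.shape[0], 3)
        # Coarse deformed geometry: template plus learned offsets, (B, V, 3).
        return template_verts.unsqueeze(0) + offsets
```

Predicting offsets on a shared template, rather than free-form geometry, keeps the coarse stage low-dimensional and gives the fine stage a stable surface to anchor to.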
We then initialize 3D Gaussians on the deformed surface and refine their positions in a fine step.
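A minimal sketch of this fine step, under the simplifying assumption of one Gaussian per mesh vertex: the Gaussian means track the coarse surface, and a learnable offset refines their positions. The remaining Gaussian attributes (scale, rotation, opacity, color) are shown as plain learnable tensors here; the actual method may predict them dynamically.

```python
import torch
import torch.nn as nn

class SurfaceGaussians(nn.Module):
    """3D Gaussians anchored to the deformed surface, one per vertex here."""
    def __init__(self, num_gaussians: int):
        super().__init__()
        self.pos_refine = nn.Parameter(torch.zeros(num_gaussians, 3))         # fine position offsets
        self.log_scales = nn.Parameter(torch.full((num_gaussians, 3), -4.0))  # anisotropic scales
        self.rot_res = nn.Parameter(torch.zeros(num_gaussians, 4))            # quaternion residuals
        self.opacity = nn.Parameter(torch.zeros(num_gaussians, 1))
        self.colors = nn.Parameter(torch.zeros(num_gaussians, 3))

    def forward(self, deformed_verts: torch.Tensor):
        # deformed_verts: (B, V, 3) from the coarse step. The means follow
        # the coarse surface while the refinement offsets add fine detail.
        means = deformed_verts + self.pos_refine.unsqueeze(0)
        # Residual around the identity quaternion (1, 0, 0, 0), renormalized.
        quats = nn.functional.normalize(
            self.rot_res + torch.tensor([1.0, 0.0, 0.0, 0.0]), dim=-1)
        return (means, self.log_scales.exp(), quats,
                torch.sigmoid(self.opacity), torch.sigmoid(self.colors))
```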
We train this coarse-to-fine facial avatar model, with the head pose as a learnable parameter, in an end-to-end framework.
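A hypothetical end-to-end training loop, reusing the two sketches above: per-frame rigid head poses are stored as learnable parameters (axis-angle rotation plus translation) and optimized jointly with the deformer and the Gaussians. The `rasterize_gaussians` stub stands in for any differentiable 3D Gaussian splatting renderer (its signature is illustrative, not a real library API), and `loader` is an assumed multi-view data loader.

```python
import torch
from pytorch3d.transforms import axis_angle_to_matrix  # assumed dependency

def rasterize_gaussians(means, scales, quats, opacity, colors, camera):
    raise NotImplementedError("plug in a differentiable Gaussian rasterizer")

num_frames, V = 1000, 5023                     # illustrative sizes
template_verts = torch.randn(V, 3)             # stand-in for a registered template mesh
coarse = CoarseDeformer(num_vertices=V)        # from the coarse-step sketch
gaussians = SurfaceGaussians(num_gaussians=V)  # from the fine-step sketch
head_poses = torch.nn.Parameter(torch.zeros(num_frames, 6))  # axis-angle + translation

opt = torch.optim.Adam(
    [*coarse.parameters(), *gaussians.parameters(), head_poses], lr=1e-3)

# Assumed loader yielding driving frames, target images, cameras, frame ids.
for frames, images, cams, frame_ids in loader:
    deformed = coarse(frames, template_verts)
    means, scales, quats, opacity, colors = gaussians(deformed)
    # Apply the learnable rigid head pose of each training frame.
    R = axis_angle_to_matrix(head_poses[frame_ids, :3])           # (B, 3, 3)
    means = means @ R.transpose(-1, -2) + head_poses[frame_ids, 3:].unsqueeze(1)
    pred = rasterize_gaussians(means, scales, quats, opacity, colors, cams)
    loss = (pred - images).abs().mean()        # simple photometric L1 loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the pose, the coarse deformation, and the Gaussian refinement all receive gradients from the same photometric loss, no separate head-tracking stage is needed.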
This enables not only controllable facial animation via video inputs, but also high-fidelity novel-view synthesis of challenging facial expressions, such as tongue deformations and fine-grained teeth structure under large motion changes. Moreover, it encourages the learned head avatar to generalize to new facial expressions and head poses at inference time. We demonstrate the performance of our method through comparisons against related methods on different datasets, spanning challenging facial expression sequences across multiple identities. We also show the potential of our approach by demonstrating a cross-identity facial performance transfer application.
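One way such a transfer could look, again reusing the hypothetical modules above: the expression code is extracted from a source actor's frames but decoded through the target actor's trained deformer and Gaussians, so the target's geometry and appearance follow the source's performance. `source_frames` and the trained target modules are assumed inputs.

```python
import torch

with torch.no_grad():
    code = coarse.encoder(source_frames)               # source actor's expression code
    offsets = coarse.offset_head(code).view(-1, V, 3)  # decoded by target's deformer
    deformed = template_verts.unsqueeze(0) + offsets
    splats = gaussians(deformed)                       # rendered with target's Gaussians
```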