AniFaceDiff: Animating stylized avatars via parametric conditioned diffusion models

IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Ken Chen, Sachith Seneviratne, Wei Wang, Dongting Hu, Sanjay Saha, Md. Tarek Hasan, Sanka Rasnayaka, Tamasha Malepathirana, Mingming Gong, Saman Halgamuge
DOI: 10.1016/j.patcog.2025.112017
Pattern Recognition, Volume 170, Article 112017. Published 2025-06-28.
URL: https://www.sciencedirect.com/science/article/pii/S0031320325006776
Citations: 0

Abstract

Animating stylized head avatars with dynamic poses and expressions has become an important focus in recent research due to its broad range of applications (e.g. VR/AR, film and animation, privacy protection). Previous research has made significant progress by training controllable generative models to animate the reference avatar using the target pose and expression. However, existing portrait animation methods are mostly trained using human faces, making them struggle to generalize to stylized avatar references such as cartoon, painting, and 3D-rendered avatars. Moreover, the mechanisms used to animate avatars – namely, to control the pose and expression of the reference – often inadvertently introduce unintended features – such as facial shape – from the target, while also causing a loss of intended features, like expression-related details. This paper proposes AniFaceDiff, a Stable Diffusion based method with a new conditioning module for animating stylized avatars. First, we propose a refined spatial conditioning approach by Facial Alignment to minimize identity mismatches, particularly between stylized avatars and human faces. Then, we introduce an Expression Adapter that incorporates additional cross-attention layers to address the potential loss of expression-related information. Extensive experiments demonstrate that our method achieves state-of-the-art performance, particularly in the most challenging out-of-domain stylized avatar animation, i.e., domains unseen during training. It delivers superior image quality, identity preservation, and expression accuracy. This work enhances the quality of virtual stylized avatar animation for constructive and responsible applications. To promote ethical use in virtual environments, we contribute to the advancement of face manipulation detection by evaluating state-of-the-art detectors, highlighting potential areas for improvement, and suggesting solutions.
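The Expression Adapter described above injects expression information into the denoising network through additional cross-attention layers, so that expression-related details are read from a dedicated conditioning stream rather than entangled with the spatial condition. The paper page gives no implementation, so the following is only a minimal sketch of that general mechanism, assuming single-head attention, random toy weights, and hypothetical token shapes (image tokens from a U-Net feature map attending to a small set of expression tokens):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expression_cross_attention(image_tokens, expr_tokens, Wq, Wk, Wv):
    """One cross-attention step: image tokens (queries) attend to
    expression tokens (keys/values), with a residual connection so the
    layer only adds expression information on top of existing features."""
    Q = image_tokens @ Wq                      # (N, d) queries from image features
    K = expr_tokens @ Wk                       # (M, d) keys from expression features
    V = expr_tokens @ Wv                       # (M, d) values from expression features
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # scaled dot-product scores (N, M)
    attn = softmax(scores, axis=-1)            # each image token's weights over expressions
    return image_tokens + attn @ V             # residual injection of expression info

rng = np.random.default_rng(0)
d = 8
img = rng.standard_normal((16, d))             # 16 spatial tokens (hypothetical U-Net features)
expr = rng.standard_normal((4, d))             # 4 expression tokens (hypothetical encoder output)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = expression_cross_attention(img, expr, Wq, Wk, Wv)
print(out.shape)
```

In an actual diffusion backbone these layers would sit alongside the existing text cross-attention blocks and be trained while the base weights stay frozen or lightly tuned; the sketch only illustrates the attention arithmetic, not the paper's training setup.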
Source journal
Pattern Recognition
Category: Engineering Technology / Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.