Graph-Driven Micro-Expression Rendering with Emotionally Diverse Expressions for Lifelike Digital Humans.

IF 3.9 · CAS Tier 3 (Medicine) · JCR Q1, Engineering, Multidisciplinary
Lei Fang, Fan Yang, Yichen Lin, Jing Zhang, Mincheol Whang
{"title":"Graph-Driven Micro-Expression Rendering with Emotionally Diverse Expressions for Lifelike Digital Humans.","authors":"Lei Fang, Fan Yang, Yichen Lin, Jing Zhang, Mincheol Whang","doi":"10.3390/biomimetics10090587","DOIUrl":null,"url":null,"abstract":"<p><p>Micro-expressions, characterized by brief and subtle facial muscle movements, are essential for conveying nuanced emotions in digital humans, yet existing rendering techniques often produce rigid or emotionally monotonous animations due to the inadequate modeling of temporal dynamics and action unit interdependencies. This paper proposes a graph-driven framework for micro-expression rendering that generates emotionally diverse and lifelike expressions. We employ a 3D-ResNet-18 backbone network to perform joint spatio-temporal feature extraction from facial video sequences, enhancing sensitivity to transient motion cues. Action units (AUs) are modeled as nodes in a symmetric graph, with edge weights derived from empirical co-occurrence probabilities and processed via a graph convolutional network to capture structural dependencies and symmetric interactions. This symmetry is justified by the inherent bilateral nature of human facial anatomy, where AU relationships are based on co-occurrence and facial anatomy analysis (as per the FACS), which are typically undirected and symmetric. Human faces are symmetric, and such relationships align with the design of classic spectral GCNs for undirected graphs, assuming that adjacency matrices are symmetric to model non-directional co-occurrences effectively. Predicted AU activations and timestamps are interpolated into continuous motion curves using B-spline functions and mapped to skeletal controls within a real-time animation pipeline (Unreal Engine). Experiments on the CASME II dataset demonstrate superior performance, achieving an F1-score of 77.93% and an accuracy of 84.80% (<i>k</i>-fold cross-validation, k = 5), outperforming baselines in temporal segmentation. Subjective evaluations confirm that the rendered digital human exhibits improvements in perceptual clarity, naturalness, and realism. This approach bridges micro-expression recognition and high-fidelity facial animation, enabling more expressive virtual interactions through curve extraction from AU values and timestamps.</p>","PeriodicalId":8907,"journal":{"name":"Biomimetics","volume":"10 9","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12467307/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/biomimetics10090587","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Micro-expressions, characterized by brief and subtle facial muscle movements, are essential for conveying nuanced emotions in digital humans, yet existing rendering techniques often produce rigid or emotionally monotonous animations because they model temporal dynamics and action-unit interdependencies inadequately. This paper proposes a graph-driven framework for micro-expression rendering that generates emotionally diverse and lifelike expressions. A 3D-ResNet-18 backbone performs joint spatio-temporal feature extraction from facial video sequences, enhancing sensitivity to transient motion cues. Action units (AUs) are modeled as nodes in a symmetric graph whose edge weights derive from empirical co-occurrence probabilities; a graph convolutional network processes this graph to capture structural dependencies and symmetric interactions. The symmetry assumption follows from the bilateral nature of human facial anatomy: AU relationships, derived from co-occurrence statistics and FACS-based anatomical analysis, are non-directional, so a symmetric adjacency matrix matches the design of classic spectral GCNs for undirected graphs. Predicted AU activations and timestamps are interpolated into continuous motion curves using B-spline functions and mapped to skeletal controls within a real-time animation pipeline (Unreal Engine). Experiments on the CASME II dataset demonstrate superior performance, achieving an F1-score of 77.93% and an accuracy of 84.80% under 5-fold cross-validation and outperforming baselines in temporal segmentation. Subjective evaluations confirm that the rendered digital human improves in perceptual clarity, naturalness, and realism. This approach bridges micro-expression recognition and high-fidelity facial animation, enabling more expressive virtual interactions through curve extraction from AU values and timestamps.
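The pipeline steps named in the abstract can be illustrated with short sketches, in pipeline order. First, the spatio-temporal feature extraction: this is a minimal sketch, not the authors' code, assuming torchvision's r3d_18 (a 3D ResNet-18 for video) as a stand-in for the paper's 3D-ResNet-18 backbone; the clip shape (batch, channels, frames, height, width) and 16-frame length are illustrative.

```python
# Sketch: extract a joint spatio-temporal descriptor from a facial video clip.
import torch
from torchvision.models.video import r3d_18

backbone = r3d_18(weights=None)          # 3D ResNet-18 over (C, T, H, W) clips
backbone.fc = torch.nn.Identity()        # drop the classifier head; keep 512-d features

clip = torch.rand(1, 3, 16, 112, 112)    # one 16-frame RGB facial video clip (illustrative size)
with torch.no_grad():
    features = backbone(clip)            # (1, 512) spatio-temporal descriptor
print(features.shape)
```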
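Second, the symmetric AU graph and its GCN propagation. Again a minimal sketch rather than the authors' implementation: the number of AUs, the feature dimension, and the random co-occurrence counts are placeholders, and the single layer uses the standard spectral GCN propagation rule H' = ReLU(Â H W) with the symmetrically normalized adjacency Â = D^{-1/2}(A + I)D^{-1/2} that the abstract's undirected-graph assumption implies.

```python
# Sketch: symmetric AU adjacency from co-occurrence counts + one GCN layer.
import torch
import torch.nn as nn

def build_symmetric_adjacency(cooccurrence: torch.Tensor) -> torch.Tensor:
    """Turn raw AU co-occurrence counts into a normalized, symmetric adjacency."""
    a = 0.5 * (cooccurrence + cooccurrence.T)      # symmetrize: co-occurrence is non-directional
    a = a / a.max().clamp(min=1e-8)                # scale edge weights into [0, 1]
    a = a + torch.eye(a.size(0))                   # add self-loops
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)            # D^{-1/2}
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class AUGraphConv(nn.Module):
    """One spectral GCN layer over AU nodes: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(a_hat @ h))

if __name__ == "__main__":
    num_aus, feat_dim = 12, 64                        # illustrative sizes
    counts = torch.rand(num_aus, num_aus)             # stand-in for empirical co-occurrence counts
    a_hat = build_symmetric_adjacency(counts)
    au_features = torch.rand(num_aus, feat_dim)       # e.g. per-AU features pooled from the backbone
    out = AUGraphConv(feat_dim, feat_dim)(au_features, a_hat)
    print(out.shape)                                  # torch.Size([12, 64])
```

Symmetrizing the co-occurrence matrix before normalization is what lets the D^{-1/2}(A + I)D^{-1/2} form apply cleanly, since classic spectral GCNs assume an undirected graph.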
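Third, the B-spline step that turns discrete AU predictions into continuous motion curves. This is a minimal, assumption-laden example: the timestamps, intensities, and 60 fps resampling rate are illustrative, and SciPy's make_interp_spline stands in for whatever spline routine the paper's pipeline uses before the values drive Unreal Engine skeletal controls.

```python
# Sketch: fit a cubic B-spline through sparse AU keyframes and resample densely.
import numpy as np
from scipy.interpolate import make_interp_spline

# Predicted AU activation keyframes: onset -> apex -> offset of a micro-expression (illustrative).
timestamps = np.array([0.00, 0.08, 0.16, 0.28, 0.40])   # seconds
activation = np.array([0.00, 0.35, 0.90, 0.30, 0.00])   # AU intensity in [0, 1]

# Cubic B-spline through the keyframes (k=3 needs at least 4 points).
spline = make_interp_spline(timestamps, activation, k=3)

# Dense resampling at the animation frame rate (e.g. 60 fps).
frame_times = np.arange(timestamps[0], timestamps[-1], 1.0 / 60.0)
curve = np.clip(spline(frame_times), 0.0, 1.0)           # keep intensities in range

# Each (frame_time, value) pair would then drive one skeletal/blendshape control.
for t, v in zip(frame_times[:5], curve[:5]):
    print(f"t={t:.3f}s  AU intensity={v:.3f}")
```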
