FACET–VLM: Facial emotion learning with text-guided multiview fusion via vision-language model for 3D/4D facial expression recognition

Impact Factor: 6.5 · CAS Region 2 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Muzammil Behzad
{"title":"FACET–VLM: Facial emotion learning with text-guided multiview fusion via vision-language model for 3D/4D facial expression recognition","authors":"Muzammil Behzad","doi":"10.1016/j.neucom.2025.131621","DOIUrl":null,"url":null,"abstract":"<div><div>Facial expression recognition (FER) in 3D and 4D domains presents a significant challenge in affective computing due to the complexity of spatial and temporal facial dynamics. Its success is crucial for advancing applications in human behavior understanding, healthcare monitoring, and human-computer interaction. In this work, we propose FACET–VLM, a vision–language framework for 3D/4D FER that integrates multiview facial representation learning with semantic guidance from natural language prompts. FACET–VLM introduces three key components: Cross-View Semantic Aggregation (CVSA) for view-consistent fusion, Multiview Text-Guided Fusion (MTGF) for semantically aligned facial emotions, and a multiview consistency loss to enforce structural coherence across views. Our model achieves state-of-the-art accuracy across multiple benchmarks, including BU-3DFE, Bosphorus, BU-4DFE, and BP4D-Spontaneous. We further extend FACET–VLM to 4D micro-expression recognition (MER) on the 4DME dataset, demonstrating strong performance in capturing subtle, short-lived emotional cues. FACET–VLM achieves up to 99.41 % accuracy on BU-4DFE and outperforms prior methods by margins as high as 15.12 % in cross-dataset evaluation on BP4D. The extensive experimental results confirm the effectiveness and substantial contributions of each individual component within the framework. Overall, FACET–VLM offers a robust, extensible, and high-performing solution for multimodal FER in both posed and spontaneous settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"657 ","pages":"Article 131621"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225022933","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Facial expression recognition (FER) in 3D and 4D domains presents a significant challenge in affective computing due to the complexity of spatial and temporal facial dynamics. Its success is crucial for advancing applications in human behavior understanding, healthcare monitoring, and human-computer interaction. In this work, we propose FACET–VLM, a vision–language framework for 3D/4D FER that integrates multiview facial representation learning with semantic guidance from natural language prompts. FACET–VLM introduces three key components: Cross-View Semantic Aggregation (CVSA) for view-consistent fusion, Multiview Text-Guided Fusion (MTGF) for semantically aligned facial emotions, and a multiview consistency loss to enforce structural coherence across views. Our model achieves state-of-the-art accuracy across multiple benchmarks, including BU-3DFE, Bosphorus, BU-4DFE, and BP4D-Spontaneous. We further extend FACET–VLM to 4D micro-expression recognition (MER) on the 4DME dataset, demonstrating strong performance in capturing subtle, short-lived emotional cues. FACET–VLM achieves up to 99.41 % accuracy on BU-4DFE and outperforms prior methods by margins as high as 15.12 % in cross-dataset evaluation on BP4D. The extensive experimental results confirm the effectiveness and substantial contributions of each individual component within the framework. Overall, FACET–VLM offers a robust, extensible, and high-performing solution for multimodal FER in both posed and spontaneous settings.
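The paper's implementation is not reproduced on this page. As a rough illustration of two ideas named in the abstract, the sketch below shows one plausible way a multiview consistency loss and a text-guided fusion of view embeddings could look in PyTorch. All function names, tensor shapes, and the exact loss forms here are assumptions for illustration, not the authors' CVSA/MTGF modules.

```python
# Illustrative sketch only (assumed shapes and loss forms, not the authors' code).
import torch
import torch.nn.functional as F


def multiview_consistency_loss(view_embeddings: torch.Tensor) -> torch.Tensor:
    """view_embeddings: (V, B, D) tensor of V view embeddings per sample.

    Penalizes cosine disagreement between each view embedding and the
    cross-view mean, encouraging structurally coherent features across views.
    """
    z = F.normalize(view_embeddings, dim=-1)                    # unit-normalize per view
    anchor = F.normalize(z.mean(dim=0, keepdim=True), dim=-1)   # (1, B, D) cross-view mean
    return (1.0 - (z * anchor).sum(dim=-1)).mean()              # mean cosine distance


def text_guided_fusion(view_embeddings: torch.Tensor,
                       text_embedding: torch.Tensor) -> torch.Tensor:
    """Fuse view embeddings (V, B, D) with weights derived from their
    similarity to a prompt embedding (B, D), so views that agree with the
    text semantics dominate the fused representation."""
    z = F.normalize(view_embeddings, dim=-1)                    # (V, B, D)
    t = F.normalize(text_embedding, dim=-1).unsqueeze(0)        # (1, B, D)
    weights = torch.softmax((z * t).sum(dim=-1), dim=0)         # (V, B) per-view weights
    return (weights.unsqueeze(-1) * view_embeddings).sum(dim=0) # (B, D) fused embedding


# Toy usage with random features standing in for encoder outputs.
views = torch.randn(4, 8, 512)   # 4 facial views, batch of 8, 512-d embeddings
prompt = torch.randn(8, 512)     # one text-prompt embedding per sample
loss = multiview_consistency_loss(views)
fused = text_guided_fusion(views, prompt)
```

In the actual framework, a consistency term of this kind would be combined with the recognition objective, and the precise attention mechanisms inside CVSA and MTGF are specified in the paper itself.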
Source journal
Neurocomputing (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.