Micro-expression spotting based on multi-modal hierarchical semantic guided deep fusion and optical flow driven feature integration

IF 5.0 | CAS Zone 2 (Computer Science) | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Haolin Chang, Zhihua Xie, Fan Yang
{"title":"Micro-expression spotting based on multi-modal hierarchical semantic guided deep fusion and optical flow driven feature integration","authors":"Haolin Chang, Zhihua Xie, Fan Yang","doi":"10.1007/s40747-025-01855-3","DOIUrl":null,"url":null,"abstract":"<p>Micro-expression (ME), as an involuntary and brief facial expression, holds significant potential applications in fields such as political psychology, lie detection, law enforcement, and healthcare. Most existing micro-expression spotting (MES) methods predominantly learn from optical flow features while neglecting the detailed information contained in RGB images. To address this issue, this paper proposes a multi-scale hierarchical semantic-guided end-to-end multimodal fusion framework based on Convolutional Neural Network (CNN)-Transformer for MES, named MESFusion. Specifically, to obtain cross-modal complementary information, this scheme sequentially constructs a Multi-Scale Feature Extraction Module (MFEM) and a Multi-scale hierarchical Semantic-Guided Fusion Module (MSGFM). By introducing an Optical Flow-Driven fusion feature Integration Module (OF-DIM), the correlation of non-scale fusion features is modeled in the channel dimension. Moreover, guided by the optical flow motion information, this approach can adaptively focus on facial motion areas and filter out interference information in cross-modal fusion. Extensive experiments conducted on the CAS(ME)<sup>2</sup> dataset and the SAMM Long Videos dataset demonstrate that the MESFusion model surpasses competitive baselines and achieves new state-of-the-art results.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"39 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01855-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Micro-expressions (MEs) are involuntary, brief facial expressions with significant potential applications in fields such as political psychology, lie detection, law enforcement, and healthcare. Most existing micro-expression spotting (MES) methods learn predominantly from optical flow features while neglecting the detailed information contained in RGB images. To address this issue, this paper proposes MESFusion, a multi-scale hierarchical semantic-guided, end-to-end multimodal fusion framework for MES based on a CNN-Transformer architecture. Specifically, to obtain cross-modal complementary information, the scheme sequentially constructs a Multi-Scale Feature Extraction Module (MFEM) and a Multi-Scale Hierarchical Semantic-Guided Fusion Module (MSGFM). An Optical Flow-Driven fusion feature Integration Module (OF-DIM) then models the correlation of the multi-scale fusion features in the channel dimension. Moreover, guided by optical flow motion information, the approach adaptively focuses on facial motion areas and filters out interference in the cross-modal fusion. Extensive experiments on the CAS(ME)² dataset and the SAMM Long Videos dataset demonstrate that MESFusion surpasses competitive baselines and achieves new state-of-the-art results.
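
Because the abstract names the modules but not their internals, the sketch below is only a speculative reading of the pipeline: the module names (MFEM, MSGFM, OF-DIM) and the RGB-plus-optical-flow input pairing come from the abstract, while every internal design choice (channel widths, the cross-attention used for semantic-guided fusion, the channel gating inside OF-DIM, and the toy per-frame spotting head) is a hypothetical stand-in, not the authors' implementation.

```python
# Speculative PyTorch sketch of the MESFusion data flow described in the
# abstract. Module names come from the paper; all internals are assumptions.
import torch
import torch.nn as nn


class MFEM(nn.Module):
    """Multi-Scale Feature Extraction Module: one conv branch per kernel
    size (an assumed realization of "multi-scale")."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )

    def forward(self, x):
        return [branch(x) for branch in self.branches]  # same-size map per scale


class MSGFM(nn.Module):
    """Multi-Scale Hierarchical Semantic-Guided Fusion Module: per scale,
    flow features query RGB features via cross-attention (assumed; the
    paper only says the fusion is CNN-Transformer based)."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, rgb_feats, flow_feats):
        fused = []
        for r, f in zip(rgb_feats, flow_feats):
            b, c, h, w = r.shape
            q = f.flatten(2).transpose(1, 2)    # flow tokens as queries
            kv = r.flatten(2).transpose(1, 2)   # RGB tokens as keys/values
            out, _ = self.attn(q, kv, kv)
            fused.append(out.transpose(1, 2).reshape(b, c, h, w))
        return fused


class OFDIM(nn.Module):
    """Optical Flow-Driven fusion feature Integration Module: channel
    attention over the concatenated multi-scale fusion features
    (assumed squeeze-and-excitation-style gating)."""
    def __init__(self, ch, scales=3):
        super().__init__()
        total = ch * scales
        self.gate = nn.Sequential(
            nn.Linear(total, total // 4), nn.ReLU(),
            nn.Linear(total // 4, total), nn.Sigmoid(),
        )
        self.head = nn.Linear(total, 1)  # toy per-frame spotting score

    def forward(self, fused_feats):
        x = torch.cat(fused_feats, dim=1)        # stack scales on channels
        gate = self.gate(x.mean(dim=(2, 3)))     # squeeze: global avg pool
        x = x * gate[:, :, None, None]           # excite: channel reweighting
        return self.head(x.mean(dim=(2, 3)))     # one score per frame


if __name__ == "__main__":
    rgb = torch.randn(2, 3, 32, 32)    # RGB frames
    flow = torch.randn(2, 2, 32, 32)   # dense optical flow (u, v)
    fused = MSGFM(32)(MFEM(3, 32)(rgb), MFEM(2, 32)(flow))
    print(OFDIM(32)(fused).shape)      # torch.Size([2, 1])
```

The real model would score whole frame sequences with a trained spotting head; the sketch only fixes the order of operations the abstract implies: multi-scale extraction of each modality, semantic-guided cross-modal fusion per scale, then flow-driven channel integration.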

Source Journal

Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
About the journal: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.