{"title":"Micro-expression spotting based on multi-modal hierarchical semantic guided deep fusion and optical flow driven feature integration","authors":"Haolin Chang, Zhihua Xie, Fan Yang","doi":"10.1007/s40747-025-01855-3","DOIUrl":null,"url":null,"abstract":"<p>Micro-expression (ME), as an involuntary and brief facial expression, holds significant potential applications in fields such as political psychology, lie detection, law enforcement, and healthcare. Most existing micro-expression spotting (MES) methods predominantly learn from optical flow features while neglecting the detailed information contained in RGB images. To address this issue, this paper proposes a multi-scale hierarchical semantic-guided end-to-end multimodal fusion framework based on Convolutional Neural Network (CNN)-Transformer for MES, named MESFusion. Specifically, to obtain cross-modal complementary information, this scheme sequentially constructs a Multi-Scale Feature Extraction Module (MFEM) and a Multi-scale hierarchical Semantic-Guided Fusion Module (MSGFM). By introducing an Optical Flow-Driven fusion feature Integration Module (OF-DIM), the correlation of non-scale fusion features is modeled in the channel dimension. Moreover, guided by the optical flow motion information, this approach can adaptively focus on facial motion areas and filter out interference information in cross-modal fusion. Extensive experiments conducted on the CAS(ME)<sup>2</sup> dataset and the SAMM Long Videos dataset demonstrate that the MESFusion model surpasses competitive baselines and achieves new state-of-the-art results.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"39 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01855-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Micro-expressions (MEs) are involuntary, brief facial expressions with significant potential applications in fields such as political psychology, lie detection, law enforcement, and healthcare. Most existing micro-expression spotting (MES) methods learn predominantly from optical flow features while neglecting the detailed appearance information contained in RGB images. To address this issue, this paper proposes MESFusion, a multi-scale hierarchical semantic-guided, end-to-end multimodal fusion framework for MES built on a Convolutional Neural Network (CNN)-Transformer architecture. Specifically, to capture cross-modal complementary information, the scheme sequentially constructs a Multi-Scale Feature Extraction Module (MFEM) and a Multi-scale hierarchical Semantic-Guided Fusion Module (MSGFM). An Optical Flow-Driven fusion feature Integration Module (OF-DIM) is then introduced to model the correlation of the multi-scale fusion features along the channel dimension. Guided by the optical flow motion information, the approach adaptively focuses on facial motion regions and filters out interference in the cross-modal fusion. Extensive experiments on the CAS(ME)² dataset and the SAMM Long Videos dataset demonstrate that the MESFusion model surpasses competitive baselines and achieves new state-of-the-art results.
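The abstract describes the architecture only at a high level and the authors' implementation is not given here. The following is a minimal PyTorch sketch of how such a pipeline could be wired together, assuming a two-channel (u, v) optical flow input, three convolutional scales, Transformer self-attention fusion at each scale, and an SE-style channel gate predicted from the flow features. Module names follow the abstract (MFEM, MSGFM, OF-DIM), but every kernel size, channel count, and wiring detail below is an illustrative assumption, not the paper's actual design.

```python
import torch
import torch.nn as nn


class MFEM(nn.Module):
    """Multi-Scale Feature Extraction Module: parallel convolutions at three
    receptive-field scales (kernel sizes are an assumption)."""

    def __init__(self, in_ch: int, out_ch: int = 64):
        super().__init__()
        self.scales = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2) for k in (1, 3, 5)]
        )

    def forward(self, x):
        return [torch.relu(conv(x)) for conv in self.scales]


class MSGFM(nn.Module):
    """Semantic-guided fusion at one scale: Transformer self-attention over the
    concatenated RGB and optical-flow tokens."""

    def __init__(self, ch: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, rgb, flow):
        b, c, h, w = rgb.shape
        tokens = torch.cat([rgb, flow], dim=3).flatten(2).transpose(1, 2)  # (B, 2HW, C)
        fused, _ = self.attn(tokens, tokens, tokens)
        # keep the RGB half of the token sequence as the fused spatial map
        return fused[:, : h * w].transpose(1, 2).reshape(b, c, h, w)


class OFDIM(nn.Module):
    """Optical Flow-Driven Integration Module: SE-style channel gate predicted
    from the flow features, so motion-related channels are emphasised."""

    def __init__(self, ch: int = 64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, fused, flow):
        return fused * self.gate(flow)


class MESFusionSketch(nn.Module):
    """End-to-end wiring: per-modality MFEM -> per-scale MSGFM -> OF-DIM -> head."""

    def __init__(self, num_classes: int = 2, ch: int = 64):
        super().__init__()
        self.rgb_mfem = MFEM(in_ch=3, out_ch=ch)    # RGB frame
        self.flow_mfem = MFEM(in_ch=2, out_ch=ch)   # (u, v) optical flow field
        self.msgfm = nn.ModuleList([MSGFM(ch) for _ in range(3)])
        self.ofdim = OFDIM(ch)
        self.head = nn.Linear(ch, num_classes)      # frame-level spotting score

    def forward(self, rgb, flow):
        rgb_feats = self.rgb_mfem(rgb)
        flow_feats = self.flow_mfem(flow)
        fused = sum(m(r, f) for m, r, f in zip(self.msgfm, rgb_feats, flow_feats))
        fused = self.ofdim(fused, flow_feats[1])    # gate with the mid-scale flow map
        return self.head(fused.mean(dim=(2, 3)))


if __name__ == "__main__":
    model = MESFusionSketch()
    scores = model(torch.randn(1, 3, 32, 32), torch.randn(1, 2, 32, 32))
    print(scores.shape)  # torch.Size([1, 2])
```

In this sketch the optical flow plays the guiding role the abstract describes: the OF-DIM gate is computed solely from the flow features, so channels that correlate with facial motion are amplified in the fused representation while appearance-only channels are attenuated.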
Journal Introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.