Title: AU-Guided Feature Aggregation for Micro-Expression Recognition
Authors: Xiaohui Tan, Weiqi Xu, Jiazheng Wu, Hao Geng, Qichuan Geng
Journal: Computer Animation and Virtual Worlds, vol. 36, no. 3
DOI: 10.1002/cav.70041
Published: 2025-06-17
URL: https://onlinelibrary.wiley.com/doi/10.1002/cav.70041
Citations: 0
Abstract
Micro-expressions (MEs) are spontaneous, transient facial movements that reveal genuine internal emotions and have found wide application across many fields. Deep learning-based methods for micro-expression recognition (MER) have developed rapidly in recent years. Still, they typically capture only one side of MEs, covering either appearance features alone or low-order Action Unit (AU) features alone. Because the changes in MEs are subtle, their feature representations are weak and inconspicuous, so analyzing MEs from a single cue or a small amount of information rarely yields strong recognition performance. Moreover, low-order information can only distinguish MEs from a single low-dimensional perspective and neglects the mutual correspondence between MEs and AU combinations. To address these issues, we first explore, through statistical analysis, how the higher-order relations among different AU combinations correspond to MEs. Based on this property, we then propose an end-to-end multi-stream model that integrates global feature learning with local muscle-movement representation guided by AU semantic information. Comparative experiments on benchmark datasets show better performance than state-of-the-art methods, and ablation experiments demonstrate the necessity of introducing AU information and its relations into MER.
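The paper's exact architecture is not specified in this abstract. As a loose illustration of the general idea of AU-guided aggregation only (all names, shapes, and the softmax weighting are hypothetical, not the authors' method), per-region local features could be weighted by AU activation scores and fused with a global feature:

```python
import numpy as np

def au_guided_fusion(global_feat, local_feats, au_scores):
    """Hypothetical sketch: weight per-region local features by AU
    activation scores (softmax), then concatenate with the global feature."""
    w = np.exp(au_scores - au_scores.max())
    w = w / w.sum()                              # normalized AU weights
    local_agg = (w[:, None] * local_feats).sum(axis=0)  # weighted sum over regions
    return np.concatenate([global_feat, local_agg])

rng = np.random.default_rng(0)
fused = au_guided_fusion(rng.standard_normal(128),   # global stream feature
                         rng.standard_normal((5, 64)),  # 5 local region features
                         rng.standard_normal(5))        # 5 AU scores
print(fused.shape)  # (192,)
```

In a real model the AU scores would come from a learned AU-detection stream rather than random values, and the fused vector would feed a classifier; this sketch only shows the aggregation step.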
Journal overview:
With the advent of very powerful PCs and high-end graphics cards, there has been remarkable progress in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds, and even with the real world through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and agent technology, these characters will become increasingly autonomous and even intelligent. They will inhabit Virtual Worlds in a virtual life together with animals and plants.