{"title":"High dynamic range preprocessing, ParallelAttention Transformer and CoExpression analysis for facial expression recognition","authors":"Yuntao Zhou , Thiyaporn Kantathanawat , Somkiat Tuntiwongwanich , Chunmao Liu","doi":"10.1016/j.compeleceng.2025.110110","DOIUrl":null,"url":null,"abstract":"<div><div>Facial expression recognition (FER) aims to enable computers to automatically detect and recognize human facial expressions, thereby understanding their emotional states. Despite significant technological advancements in recent years, FER tasks still face several challenges, including expression diversity, individual differences, and the impact of lighting and detail variations on recognition accuracy. To address these challenges, a high-performance FER model is proposed that comprises three key components: High Dynamic Range (HDR) Preprocessing Module, ParallelAttention VisionTransformer structure, and CoExpression Head. In the preprocessing stage, the HDR Preprocessing Module optimizes input images through local contrast and detail enhancement techniques, improving the model’s adaptability to lighting and detail variations. During the feature processing stage, the ParallelAttention VisionTransformer structure employs a multi-head self-attention mechanism encoder to effectively capture and process facial expression features at various scales, allowing for a detailed understanding of subtle facial expression differences. Finally, the CoExpression Head utilizes a collaborative expression mechanism to efficiently handle and refine features across different expression states during the feature integration process. Combining these three stages significantly enhances the accuracy of facial expression recognition. Extensive experimental evaluations on public datasets, RAF-DB and AffectNet, demonstrate that the model achieves accuracy rates of 92.11%, 67.25%, and 63.40% on RAF-DB, AffectNet, and AffectNet-8, respectively, exhibiting outstanding performance comparable to other state-of-the-art models.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"123 ","pages":"Article 110110"},"PeriodicalIF":4.0000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Electrical Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0045790625000539","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0
Abstract
Facial expression recognition (FER) aims to enable computers to automatically detect and recognize human facial expressions and thereby infer the underlying emotional states. Despite significant technological advances in recent years, FER still faces several challenges, including expression diversity, individual differences, and the impact of lighting and detail variations on recognition accuracy. To address these challenges, a high-performance FER model is proposed that comprises three key components: a High Dynamic Range (HDR) Preprocessing Module, a ParallelAttention VisionTransformer structure, and a CoExpression Head. In the preprocessing stage, the HDR Preprocessing Module optimizes input images through local contrast and detail enhancement, improving the model’s adaptability to lighting and detail variations. In the feature processing stage, the ParallelAttention VisionTransformer employs an encoder built on multi-head self-attention to capture and process facial expression features at multiple scales, enabling a fine-grained understanding of subtle differences between expressions. Finally, the CoExpression Head uses a collaborative expression mechanism to efficiently handle and refine features across different expression states during feature integration. Together, these three stages significantly enhance the accuracy of facial expression recognition. Extensive experimental evaluations on the public RAF-DB and AffectNet benchmarks show that the model achieves accuracy rates of 92.11%, 67.25%, and 63.40% on RAF-DB, AffectNet, and AffectNet-8, respectively, delivering performance comparable to other state-of-the-art models.
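The abstract describes the three-stage design only at a high level. Below is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together; the CLAHE-based local-contrast enhancement, the two parallel self-attention branches, the gated CoExpressionHead, and all layer sizes are assumptions made purely for illustration and are not taken from the paper.

```python
# Illustrative only: all module internals below are assumptions, not the paper's code.
import cv2                      # OpenCV, used here for an assumed CLAHE-style enhancement
import numpy as np
import torch
import torch.nn as nn


def hdr_preprocess(bgr_image: np.ndarray) -> np.ndarray:
    """Assumed stand-in for the HDR Preprocessing Module: local contrast
    enhancement via CLAHE on the L channel of an 8-bit BGR image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)


class ParallelAttentionBlock(nn.Module):
    """Assumed 'ParallelAttention' block: two self-attention branches with
    different head counts run in parallel and their outputs are fused."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads // 2, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        a, _ = self.attn_a(h, h, h)
        b, _ = self.attn_b(h, h, h)
        return x + self.fuse(torch.cat([a, b], dim=-1))   # residual fusion


class CoExpressionHead(nn.Module):
    """Assumed 'CoExpression' head: gated refinement of pooled features
    before classification."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        pooled = tokens.mean(dim=1)            # average over patch tokens
        refined = pooled * self.gate(pooled)   # collaborative (gated) refinement
        return self.classifier(refined)


class FERModel(nn.Module):
    """End-to-end sketch: patch embedding -> parallel-attention encoder -> head."""

    def __init__(self, patch: int = 16, dim: int = 256, depth: int = 4, num_classes: int = 7):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.blocks = nn.Sequential(*[ParallelAttentionBlock(dim) for _ in range(depth)])
        self.head = CoExpressionHead(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.head(self.blocks(tokens))
```

As a quick smoke test of this assumed layout, `FERModel(num_classes=7)(torch.randn(2, 3, 224, 224))` returns a (2, 7) logit tensor; `hdr_preprocess` would be applied to raw BGR images before normalization and tensor conversion.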
Journal Introduction:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.