A novel facial expression recognition framework using deep learning based dynamic cross-domain dual attention network

Authors: Ahmed Omar Alzahrani, Ahmed Mohammed Alghamdi, M Usman Ashraf, Iqra Ilyas, Nadeem Sarwar, Abdulrahman Alzahrani, Alaa Abdul Salam Alarood
DOI: 10.7717/peerj-cs.2866
Journal: PeerJ Computer Science, vol. 11, e2866 (published 2025-05-09)
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192681/pdf/
Variations in domain targets have recently posed significant challenges for facial expression recognition tasks, primarily due to domain shifts. Current methods focus largely on global feature adaptation to achieve domain-invariant learning; however, transferring local features across diverse domains remains an ongoing challenge. Additionally, during training on target datasets, these methods often suffer from reduced feature representation in the target domain due to insufficient discriminative supervision. To tackle these challenges, we propose a dynamic cross-domain dual attention network for facial expression recognition. Our model is specifically designed to learn domain-invariant features through separate modules for global and local adversarial learning. We also introduce a semantic-aware module to generate pseudo-labels, which computes semantic labels from both global and local features. We assess our model's effectiveness through extensive experiments on the Real-world Affective Faces Database (RAF-DB), FER-PLUS, AffectNet, Expression in the Wild (ExpW), SFEW 2.0, and Japanese Female Facial Expression (JAFFE) datasets. The results demonstrate that our scheme outperforms existing state-of-the-art methods, attaining recognition accuracies of 93.18%, 92.35%, 82.13%, 78.37%, 72.47%, and 70.68% on these datasets, respectively.
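The abstract describes the architecture only at a high level. As an illustration, the sketch below shows one plausible PyTorch realization of the components it names: a shared backbone, a global (channel-attention) branch and a local (spatial-attention) branch, each trained adversarially against its own domain discriminator through a gradient reversal layer, and a semantic-aware head that fuses both feature sets to assign pseudo-labels on the unlabeled target domain. The backbone, layer sizes, and the confidence-threshold labeling rule are all assumptions for illustration; the paper's actual implementation is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, the standard trick for domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient sign so the backbone learns to confuse the discriminators.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DualAttentionFER(nn.Module):
    """Minimal sketch of a cross-domain dual attention network (assumed layout)."""
    def __init__(self, num_classes=7, feat_dim=256):
        super().__init__()
        # Shared convolutional backbone (stand-in for e.g. a ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Global branch: channel attention over the pooled feature map.
        self.global_attn = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4), nn.ReLU(),
            nn.Linear(feat_dim // 4, feat_dim), nn.Sigmoid(),
        )
        # Local branch: spatial attention highlighting expressive face regions.
        self.local_attn = nn.Conv2d(feat_dim, 1, kernel_size=1)
        # Separate domain discriminators for the global and local features.
        self.global_dom = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        self.local_dom = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        # Semantic-aware head: classifies from the fused global + local features.
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, x, lambd=1.0):
        fmap = self.backbone(x)                          # B x C x H x W
        pooled = fmap.mean(dim=(2, 3))                   # B x C
        g_feat = pooled * self.global_attn(pooled)       # channel-attended global feature
        spatial = torch.sigmoid(self.local_attn(fmap))   # B x 1 x H x W attention map
        l_feat = (fmap * spatial).mean(dim=(2, 3))       # spatially-attended local feature
        logits = self.classifier(torch.cat([g_feat, l_feat], dim=1))
        # Adversarial domain predictions via gradient reversal, one per branch.
        g_dom = self.global_dom(grad_reverse(g_feat, lambd))
        l_dom = self.local_dom(grad_reverse(l_feat, lambd))
        return logits, g_dom, l_dom

@torch.no_grad()
def semantic_pseudo_labels(model, target_x, threshold=0.9):
    """Pseudo-label unlabeled target images, keeping only confident predictions
    (a common heuristic; the paper's exact labeling rule may differ)."""
    logits, _, _ = model(target_x)
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return labels[mask], mask
```

In training, a cross-entropy loss on labeled source images (and on retained pseudo-labels) would be combined with the two domain-classification losses; because the gradient reversal layer flips gradients flowing into the backbone, the discriminators learn to separate domains while the shared features are pushed toward domain invariance at both the global and local level.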
Journal Introduction:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.