{"title":"基于异构双分支情绪一致性网络的面部表情识别","authors":"Shasha Mao;Yuanyuan Zhang;Dandan Yan;Puhua Chen","doi":"10.1109/LSP.2024.3505798","DOIUrl":null,"url":null,"abstract":"Due to labeling subjectivity, label noises have become a critical issue that is addressed in facial expression recognition. From the view of human visual perception, the facial exhibited emotion characteristic should be unaltered corresponding to its truth expression, rather than the noise label, whereas most methods ignore the emotion consistency during FER, especially from different networks. Based on this, we propose a new FER method based heterogeneous dual-branch emotional consistency constrains, to prevent the model from memorizing noise samples based on features associated with noisy labels. In the proposed method, the emotion consistency from spatial transformation and heterogeneous networks are simultaneously considered to guide the model to perceive the overall visual features of expressions. Meanwhile, the confidence of the given label is evaluated based on emotional attention maps of original and transformed images, which effectively enhances the classification reliability of two branches to alleviate the negative effect of noisy labels in the learning process. Additionally, the weighted ensemble strategy is used to unify two branches. 
Experimental results illustrate that the proposed method achieves better performance than the state-of-the-art methods for 10%, 20% and 30% label noises.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"566-570"},"PeriodicalIF":3.2000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Heterogeneous Dual-Branch Emotional Consistency Network for Facial Expression Recognition\",\"authors\":\"Shasha Mao;Yuanyuan Zhang;Dandan Yan;Puhua Chen\",\"doi\":\"10.1109/LSP.2024.3505798\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to labeling subjectivity, label noises have become a critical issue that is addressed in facial expression recognition. From the view of human visual perception, the facial exhibited emotion characteristic should be unaltered corresponding to its truth expression, rather than the noise label, whereas most methods ignore the emotion consistency during FER, especially from different networks. Based on this, we propose a new FER method based heterogeneous dual-branch emotional consistency constrains, to prevent the model from memorizing noise samples based on features associated with noisy labels. In the proposed method, the emotion consistency from spatial transformation and heterogeneous networks are simultaneously considered to guide the model to perceive the overall visual features of expressions. Meanwhile, the confidence of the given label is evaluated based on emotional attention maps of original and transformed images, which effectively enhances the classification reliability of two branches to alleviate the negative effect of noisy labels in the learning process. Additionally, the weighted ensemble strategy is used to unify two branches. 
Experimental results illustrate that the proposed method achieves better performance than the state-of-the-art methods for 10%, 20% and 30% label noises.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"32 \",\"pages\":\"566-570\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10845008/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10845008/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Heterogeneous Dual-Branch Emotional Consistency Network for Facial Expression Recognition
Due to labeling subjectivity, label noise has become a critical issue in facial expression recognition (FER). From the perspective of human visual perception, the emotional characteristics exhibited by a face should correspond to its true expression rather than to a noisy label, yet most methods ignore this emotion consistency during FER, especially across different networks. Motivated by this, we propose a new FER method based on heterogeneous dual-branch emotional consistency constraints, which prevents the model from memorizing noisy samples through features associated with noisy labels. In the proposed method, emotion consistency under spatial transformation and across heterogeneous networks is considered simultaneously, guiding the model to perceive the overall visual features of expressions. Meanwhile, the confidence of the given label is evaluated from the emotional attention maps of the original and transformed images, which effectively enhances the classification reliability of the two branches and alleviates the negative effect of noisy labels during learning. Additionally, a weighted ensemble strategy is used to unify the two branches. Experimental results show that the proposed method outperforms state-of-the-art methods under 10%, 20%, and 30% label noise.
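The abstract does not give the paper's equations, but the core idea of a cross-branch consistency constraint plus label-confidence scoring can be illustrated with a minimal sketch. The formulation below (symmetric KL divergence between the two branches' predictions, and label confidence as the mean predicted probability of the given label across the original and transformed views) is an assumption for illustration, not the authors' exact method:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # Symmetric KL divergence between the class distributions predicted by
    # two branches (e.g. heterogeneous networks, or original vs. transformed
    # input). Zero when both branches agree exactly.
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float(0.5 * (kl_pq + kl_qp).mean())

def label_confidence(probs_orig, probs_trans, labels):
    # Confidence of the given (possibly noisy) label: the mean probability
    # assigned to it by the model on the original and transformed images.
    # Low confidence flags samples whose label may be noise.
    idx = np.arange(len(labels))
    return 0.5 * (probs_orig[idx, labels] + probs_trans[idx, labels])
```

In a hypothetical training loop, `consistency_loss` would be added to the classification loss, and `label_confidence` would down-weight samples whose given label is poorly supported by either view.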
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.