{"title":"ARMNet:基于情感区域提取和多通道融合的图像维度情感预测网络。","authors":"Jingjing Zhang, Jiaying Sun, Chunxiao Wang, Zui Tao, Fuxiao Zhang","doi":"10.3390/s24217099","DOIUrl":null,"url":null,"abstract":"<p><p>Compared with discrete emotion space, image emotion analysis based on dimensional emotion space can more accurately represent fine-grained emotion. Meanwhile, this high-precision representation of emotion requires dimensional emotion prediction methods to sense and capture emotional information in images as accurately and richly as possible. However, the existing methods mainly focus on emotion recognition by extracting the emotional regions where salient objects are located while ignoring the joint influence of objects and background on emotion. Furthermore, in the existing literature, when fusing multi-level features, no consideration has been given to the varying contributions of features from different levels to emotional analysis, which makes it difficult to distinguish valuable and useless features and cannot improve the utilization of effective features. This paper proposes an image emotion prediction network named ARMNet. In ARMNet, a unified affective region extraction method that integrates eye fixation detection and attention detection is proposed to enhance the combined influence of objects and backgrounds. Additionally, the multi-level features are fused with the consideration of their different contributions through an improved channel attention mechanism. In comparison to the existing methods, experiments conducted on the CGnA10766 dataset demonstrate that the performance of valence and arousal, as measured by Mean Squared Error (MSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²), has improved by 4.74%, 3.53%, 3.62%, 1.93%, 6.29%, and 7.23%, respectively. Furthermore, the interpretability of the network is enhanced through the visualization of attention weights corresponding to emotional regions within the images.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"24 21","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11548427/pdf/","citationCount":"0","resultStr":"{\"title\":\"ARMNet: A Network for Image Dimensional Emotion Prediction Based on Affective Region Extraction and Multi-Channel Fusion.\",\"authors\":\"Jingjing Zhang, Jiaying Sun, Chunxiao Wang, Zui Tao, Fuxiao Zhang\",\"doi\":\"10.3390/s24217099\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Compared with discrete emotion space, image emotion analysis based on dimensional emotion space can more accurately represent fine-grained emotion. Meanwhile, this high-precision representation of emotion requires dimensional emotion prediction methods to sense and capture emotional information in images as accurately and richly as possible. However, the existing methods mainly focus on emotion recognition by extracting the emotional regions where salient objects are located while ignoring the joint influence of objects and background on emotion. Furthermore, in the existing literature, when fusing multi-level features, no consideration has been given to the varying contributions of features from different levels to emotional analysis, which makes it difficult to distinguish valuable and useless features and cannot improve the utilization of effective features. This paper proposes an image emotion prediction network named ARMNet. 
In ARMNet, a unified affective region extraction method that integrates eye fixation detection and attention detection is proposed to enhance the combined influence of objects and backgrounds. Additionally, the multi-level features are fused with the consideration of their different contributions through an improved channel attention mechanism. In comparison to the existing methods, experiments conducted on the CGnA10766 dataset demonstrate that the performance of valence and arousal, as measured by Mean Squared Error (MSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²), has improved by 4.74%, 3.53%, 3.62%, 1.93%, 6.29%, and 7.23%, respectively. Furthermore, the interpretability of the network is enhanced through the visualization of attention weights corresponding to emotional regions within the images.</p>\",\"PeriodicalId\":21698,\"journal\":{\"name\":\"Sensors\",\"volume\":\"24 21\",\"pages\":\"\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11548427/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sensors\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.3390/s24217099\",\"RegionNum\":3,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"CHEMISTRY, ANALYTICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sensors","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.3390/s24217099","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CHEMISTRY, ANALYTICAL","Score":null,"Total":0}
Compared with a discrete emotion space, image emotion analysis based on a dimensional emotion space can represent fine-grained emotion more accurately. At the same time, this high-precision representation requires dimensional emotion prediction methods that sense and capture the emotional information in images as accurately and richly as possible. However, existing methods focus mainly on extracting the emotional regions where salient objects are located, while ignoring the joint influence of objects and background on emotion. Furthermore, when fusing multi-level features, the existing literature gives no consideration to the varying contributions of features from different levels to emotion analysis, which makes it difficult to separate valuable features from useless ones and limits the utilization of effective features. This paper proposes an image emotion prediction network named ARMNet. In ARMNet, a unified affective region extraction method that integrates eye fixation detection and attention detection is proposed to capture the combined influence of objects and background. Additionally, multi-level features are fused through an improved channel attention mechanism that weights their different contributions. Experiments on the CGnA10766 dataset demonstrate that, compared with existing methods, ARMNet improves valence and arousal prediction, as measured by Mean Squared Error (MSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²), by 4.74%, 3.53%, 3.62%, 1.93%, 6.29%, and 7.23%, respectively. Furthermore, visualizing the attention weights that correspond to emotional regions within the images enhances the interpretability of the network.
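The fusion step described in the abstract lends itself to a short illustration. The following PyTorch snippet is a minimal sketch of one plausible reading: an SE-style channel attention block that reweights concatenated multi-level features before a two-output regression head for valence and arousal. The class name `ChannelAttentionFusion`, the layer sizes, the reduction ratio, and fusion by concatenation are all illustrative assumptions; the paper's actual "improved channel attention mechanism" may differ.

```python
# Illustrative sketch only: an SE-style channel attention block that weights
# multi-level CNN features before fusion. Layer sizes and the choice of
# fusion by concatenation are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Fuse multi-level feature maps, letting channel attention learn
    how much each level's channels contribute to emotion prediction."""

    def __init__(self, channels_per_level, reduction=16):
        super().__init__()
        total = sum(channels_per_level)  # channels after concatenation
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # squeeze: global context per channel
            nn.Conv2d(total, total // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(total // reduction, total, kernel_size=1),
            nn.Sigmoid(),                                     # excitation: per-channel weights in (0, 1)
        )
        self.head = nn.Sequential(                            # regression head for the two dimensions
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(total, 2),                              # outputs: valence, arousal
        )

    def forward(self, features):
        # features: list of maps from different network levels, resized to a
        # common spatial size, e.g. [(B, C_i, H, W), ...]
        x = torch.cat(features, dim=1)                        # stack levels along the channel axis
        x = x * self.attention(x)                             # reweight channels by learned contribution
        return self.head(x)

# Usage with dummy multi-level features (batch of 4, 7x7 maps):
fusion = ChannelAttentionFusion([256, 512, 1024])
feats = [torch.randn(4, c, 7, 7) for c in (256, 512, 1024)]
va = fusion(feats)                                            # shape (4, 2): valence, arousal
```

In this sketch the sigmoid gate plays the role the abstract assigns to the fusion step: it learns a per-channel weight, so channels (and hence feature levels) that contribute more to emotion prediction are amplified while less useful ones are suppressed. Evaluating such a model with MSE, MAE, and R² per dimension is then a direct application of standard regression metrics.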
Journal Introduction:
Sensors (ISSN 1424-8220) provides an advanced forum for the science and technology of sensors and biosensors. It publishes reviews (including comprehensive reviews on complete sensor products), regular research papers, and short notes. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced.