{"title":"Enhancing emotion recognition through multi-modal data fusion and graph neural networks","authors":"Kasthuri Devarajan , Suresh Ponnan , Sundresan Perumal","doi":"10.1016/j.ibmed.2025.100291","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, a novel emotion detection system is proposed based on Graph Neural Network (GNN) architecture, which is used to integrate and learn from multiple data sets (EEG, face expression, physiological signals). The proposed GNN is able to learn about interactions between multiple modalities, so as to extract a single picture of emotion categorization. This model is very good and gets 91.25 % accuracy, 91.26 % precision, 91.25 % recall and 91.25 % F1-score. Moreover, the proposed GNN is a sensible trade-off between speed and precision, with a calculation time of 163 ms. The Proposed GNN is better, primarily due to its ability to represent complex relations between multi-modal inputs, thereby improving its real-time emotional state recognition and classification performance. The proposed GNN demonstrates its suitability for powerful emotion detection by outperforming all models in classification precision and multi-modal data fusion, surpassing traditional models such as SVM, KNN, CCA, CNN, and RNN. 
The Proposed GNN consistently proves to be the most accurate and robust solution, having been the most dominant technique in emotion detection, despite CNN and RNN achieving slightly better results.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100291"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligence-based medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266652122500095X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper proposes a novel emotion detection system based on a Graph Neural Network (GNN) architecture that integrates and learns from multiple data modalities (EEG, facial expressions, and physiological signals). The proposed GNN learns the interactions between modalities to produce a unified representation for emotion categorization. The model achieves 91.25 % accuracy, 91.26 % precision, 91.25 % recall, and a 91.25 % F1-score. Moreover, it strikes a sensible trade-off between speed and precision, with a computation time of 163 ms. Its advantage stems primarily from its ability to represent complex relations between multi-modal inputs, which improves real-time recognition and classification of emotional states. By outperforming traditional models such as SVM, KNN, CCA, CNN, and RNN in classification precision and multi-modal data fusion, the proposed GNN demonstrates its suitability for robust emotion detection. It consistently proves to be the most accurate and robust solution, remaining the dominant technique even though CNN and RNN come close.
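The abstract does not specify the paper's architecture in detail, but the core idea — treating each modality as a node in a graph and letting message passing fuse them into one representation — can be illustrated with a minimal pure-Python sketch. Everything below (node names, weights, feature values, the mean-aggregation rule) is a hypothetical toy example, not the authors' implementation:

```python
import math

# Toy graph-based multi-modal fusion: each modality (EEG, facial
# expression, physiological signals) is a node; one message-passing
# step mixes each node's features with the mean of its neighbours',
# then a pooled softmax readout scores the emotion classes.
# All names and numbers are illustrative, not from the paper.

def message_pass(features, adjacency):
    """One mean-aggregation step over the modality graph."""
    fused = {}
    for node, vec in features.items():
        neigh = [features[n] for n in adjacency[node]]
        agg = [sum(vals) / len(neigh) for vals in zip(*neigh)]
        # Equal self/neighbour weighting (an arbitrary choice here).
        fused[node] = [0.5 * s + 0.5 * a for s, a in zip(vec, agg)]
    return fused

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative 3-dimensional features per modality.
features = {
    "eeg":  [0.9, 0.1, 0.0],
    "face": [0.2, 0.7, 0.1],
    "phys": [0.3, 0.3, 0.4],
}
# Fully connected modality graph: every modality attends to the others.
adjacency = {
    "eeg":  ["face", "phys"],
    "face": ["eeg", "phys"],
    "phys": ["eeg", "face"],
}

fused = message_pass(features, adjacency)
# Readout: mean-pool the fused node vectors, softmax over 3 classes.
pooled = [sum(v[i] for v in fused.values()) / len(fused) for i in range(3)]
probs = softmax(pooled)
```

A real system would replace the fixed 0.5 mixing weights with learned linear layers and stack several propagation steps, but the fusion principle — neighbour aggregation followed by a shared readout — is the same.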