{"title":"基于EEG-tNIRS数据的患者情绪识别深度学习模型","authors":"Mohan Raparthi , Nischay Reddy Mitta , Vinay Kumar Dunka , Sowmya Gudekota , Sandeep Pushyamitra Pattyam , Venkata Siva Prakash Nimmagadda","doi":"10.1016/j.neuri.2025.100219","DOIUrl":null,"url":null,"abstract":"<div><div>This study presents a novel approach that integrates electroencephalogram (EEG) and functional near-infrared spectroscopy (tNIRS) data to enhance emotion classification accuracy. A Modality-Attentive Multi-Channel Graph Convolution Model (MAMP-GF) is introduced, leveraging GraphSAGE-based representation learning to capture inter-channel relationships. Multi-level feature extraction techniques, including Channel Features (CF), Statistical Features (SF), and Graph Features (GF), are employed to maximize the discriminative power of EEG-tNIRS signals. To enhance modality fusion, we propose and evaluate three fusion strategies: MA-GF, MP-GF, and MA-MP-GF, which integrate graph convolutional networks with a modality attention mechanism. The model is trained and validated using EEG and tNIRS data collected from 30 subjects exposed to emotionally stimulating video clips. Experimental results demonstrate that the proposed MA-MP-GF fusion model achieves 98.77% accuracy in subject-dependent experiments, significantly outperforming traditional single-modal and other multimodal fusion methods. In cross-subject validation, the model attains a 55.53% accuracy, highlighting its robustness despite inter-subject variability. The findings illustrate that the proposed graph convolution fusion approach, combined with modality attention, effectively enhances emotion recognition accuracy and stability. This research underscores the potential of EEG-tNIRS fusion in real-time, non-invasive emotion monitoring, paving the way for advanced applications in personalized healthcare and affective computing.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 3","pages":"Article 100219"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep learning model for patient emotion recognition using EEG-tNIRS data\",\"authors\":\"Mohan Raparthi , Nischay Reddy Mitta , Vinay Kumar Dunka , Sowmya Gudekota , Sandeep Pushyamitra Pattyam , Venkata Siva Prakash Nimmagadda\",\"doi\":\"10.1016/j.neuri.2025.100219\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This study presents a novel approach that integrates electroencephalogram (EEG) and functional near-infrared spectroscopy (tNIRS) data to enhance emotion classification accuracy. A Modality-Attentive Multi-Channel Graph Convolution Model (MAMP-GF) is introduced, leveraging GraphSAGE-based representation learning to capture inter-channel relationships. Multi-level feature extraction techniques, including Channel Features (CF), Statistical Features (SF), and Graph Features (GF), are employed to maximize the discriminative power of EEG-tNIRS signals. To enhance modality fusion, we propose and evaluate three fusion strategies: MA-GF, MP-GF, and MA-MP-GF, which integrate graph convolutional networks with a modality attention mechanism. The model is trained and validated using EEG and tNIRS data collected from 30 subjects exposed to emotionally stimulating video clips. 
Experimental results demonstrate that the proposed MA-MP-GF fusion model achieves 98.77% accuracy in subject-dependent experiments, significantly outperforming traditional single-modal and other multimodal fusion methods. In cross-subject validation, the model attains a 55.53% accuracy, highlighting its robustness despite inter-subject variability. The findings illustrate that the proposed graph convolution fusion approach, combined with modality attention, effectively enhances emotion recognition accuracy and stability. This research underscores the potential of EEG-tNIRS fusion in real-time, non-invasive emotion monitoring, paving the way for advanced applications in personalized healthcare and affective computing.</div></div>\",\"PeriodicalId\":74295,\"journal\":{\"name\":\"Neuroscience informatics\",\"volume\":\"5 3\",\"pages\":\"Article 100219\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuroscience informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772528625000342\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772528625000342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep learning model for patient emotion recognition using EEG-tNIRS data
This study presents a novel approach that integrates electroencephalogram (EEG) and functional near-infrared spectroscopy (tNIRS) data to improve emotion classification accuracy. A Modality-Attentive Multi-Channel Graph Convolution Model (MA-MP-GF) is introduced, leveraging GraphSAGE-based representation learning to capture inter-channel relationships. Multi-level feature extraction techniques, comprising Channel Features (CF), Statistical Features (SF), and Graph Features (GF), are employed to maximize the discriminative power of EEG-tNIRS signals. To enhance modality fusion, we propose and evaluate three fusion strategies: MA-GF, MP-GF, and MA-MP-GF, which integrate graph convolutional networks with a modality attention mechanism. The model is trained and validated on EEG and tNIRS data collected from 30 subjects exposed to emotionally stimulating video clips. Experimental results demonstrate that the proposed MA-MP-GF fusion model achieves 98.77% accuracy in subject-dependent experiments, significantly outperforming traditional single-modal and other multimodal fusion methods. In cross-subject validation, the model attains 55.53% accuracy, highlighting its robustness in the face of inter-subject variability. The findings show that the proposed graph convolution fusion approach, combined with modality attention, effectively improves emotion recognition accuracy and stability. This research underscores the potential of EEG-tNIRS fusion for real-time, non-invasive emotion monitoring, paving the way for advanced applications in personalized healthcare and affective computing.
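The abstract gives no implementation details, but the architecture it describes (GraphSAGE-style graph convolution over per-modality channel graphs, followed by modality-attention fusion) can be sketched minimally. The snippet below is an illustrative PyTorch sketch, not the authors' code: all class names, dimensions, the identity adjacency matrices, the mean-pooling step, and the classifier head are assumptions made for illustration.

```python
# Minimal sketch, assuming: GraphSAGE mean aggregation per modality,
# softmax modality attention over pooled embeddings, and a linear head.
# Channel counts, feature sizes, and adjacency are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGELayer(nn.Module):
    """GraphSAGE layer with mean aggregation: h_v' = ReLU(W [h_v || mean_{u in N(v)} h_u])."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # x: (batch, channels, in_dim); adj: (channels, channels) row-normalized adjacency
        neigh = adj @ x  # mean of neighbor features per channel
        return F.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class ModalityAttentionFusion(nn.Module):
    """Scores each modality embedding, softmax-normalizes, returns the weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                 # feats: list of (batch, dim) vectors
        stacked = torch.stack(feats, dim=1)   # (batch, n_modalities, dim)
        w = torch.softmax(self.score(stacked), dim=1)
        return (w * stacked).sum(dim=1)

class EmotionNet(nn.Module):
    def __init__(self, eeg_dim, nirs_dim, hid=64, n_classes=3):
        super().__init__()
        self.eeg_gnn = SAGELayer(eeg_dim, hid)
        self.nirs_gnn = SAGELayer(nirs_dim, hid)
        self.fuse = ModalityAttentionFusion(hid)
        self.head = nn.Linear(hid, n_classes)

    def forward(self, eeg_x, eeg_adj, nirs_x, nirs_adj):
        eeg = self.eeg_gnn(eeg_x, eeg_adj).mean(dim=1)     # pool over channels
        nirs = self.nirs_gnn(nirs_x, nirs_adj).mean(dim=1)
        return self.head(self.fuse([eeg, nirs]))

# Toy forward pass: 32 EEG channels and 20 tNIRS channels, 16 features each.
eeg_adj, nirs_adj = torch.eye(32), torch.eye(20)  # placeholder channel graphs
model = EmotionNet(eeg_dim=16, nirs_dim=16)
logits = model(torch.randn(4, 32, 16), eeg_adj,
               torch.randn(4, 20, 16), nirs_adj)
print(logits.shape)  # torch.Size([4, 3])
```

A softmax over per-modality scores is one common way to realize modality attention; the paper's exact weighting scheme and its graph construction (e.g., correlation- or distance-based channel adjacency) may differ from this sketch.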