Deep learning model for patient emotion recognition using EEG-tNIRS data

Mohan Raparthi, Nischay Reddy Mitta, Vinay Kumar Dunka, Sowmya Gudekota, Sandeep Pushyamitra Pattyam, Venkata Siva Prakash Nimmagadda
{"title":"Deep learning model for patient emotion recognition using EEG-tNIRS data","authors":"Mohan Raparthi ,&nbsp;Nischay Reddy Mitta ,&nbsp;Vinay Kumar Dunka ,&nbsp;Sowmya Gudekota ,&nbsp;Sandeep Pushyamitra Pattyam ,&nbsp;Venkata Siva Prakash Nimmagadda","doi":"10.1016/j.neuri.2025.100219","DOIUrl":null,"url":null,"abstract":"<div><div>This study presents a novel approach that integrates electroencephalogram (EEG) and functional near-infrared spectroscopy (tNIRS) data to enhance emotion classification accuracy. A Modality-Attentive Multi-Channel Graph Convolution Model (MAMP-GF) is introduced, leveraging GraphSAGE-based representation learning to capture inter-channel relationships. Multi-level feature extraction techniques, including Channel Features (CF), Statistical Features (SF), and Graph Features (GF), are employed to maximize the discriminative power of EEG-tNIRS signals. To enhance modality fusion, we propose and evaluate three fusion strategies: MA-GF, MP-GF, and MA-MP-GF, which integrate graph convolutional networks with a modality attention mechanism. The model is trained and validated using EEG and tNIRS data collected from 30 subjects exposed to emotionally stimulating video clips. Experimental results demonstrate that the proposed MA-MP-GF fusion model achieves 98.77% accuracy in subject-dependent experiments, significantly outperforming traditional single-modal and other multimodal fusion methods. In cross-subject validation, the model attains a 55.53% accuracy, highlighting its robustness despite inter-subject variability. The findings illustrate that the proposed graph convolution fusion approach, combined with modality attention, effectively enhances emotion recognition accuracy and stability. This research underscores the potential of EEG-tNIRS fusion in real-time, non-invasive emotion monitoring, paving the way for advanced applications in personalized healthcare and affective computing.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 3","pages":"Article 100219"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772528625000342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This study presents a novel approach that integrates electroencephalogram (EEG) and functional near-infrared spectroscopy (tNIRS) data to enhance emotion classification accuracy. A Modality-Attentive Multi-Channel Graph Convolution Model (MA-MP-GF) is introduced, leveraging GraphSAGE-based representation learning to capture inter-channel relationships. Multi-level feature extraction techniques, including Channel Features (CF), Statistical Features (SF), and Graph Features (GF), are employed to maximize the discriminative power of EEG-tNIRS signals. To enhance modality fusion, we propose and evaluate three fusion strategies: MA-GF, MP-GF, and MA-MP-GF, which integrate graph convolutional networks with a modality attention mechanism. The model is trained and validated using EEG and tNIRS data collected from 30 subjects exposed to emotionally stimulating video clips. Experimental results demonstrate that the proposed MA-MP-GF fusion model achieves 98.77% accuracy in subject-dependent experiments, significantly outperforming traditional single-modal and other multimodal fusion methods. In cross-subject validation, the model attains 55.53% accuracy, highlighting its robustness despite inter-subject variability. The findings illustrate that the proposed graph convolution fusion approach, combined with modality attention, effectively enhances emotion recognition accuracy and stability. This research underscores the potential of EEG-tNIRS fusion in real-time, non-invasive emotion monitoring, paving the way for advanced applications in personalized healthcare and affective computing.
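The abstract names the building blocks but not their implementation. The following is a minimal PyTorch sketch of the two core ideas: a GraphSAGE-style convolution over each modality's channel graph, and softmax modality attention for EEG-tNIRS fusion. All class names, feature dimensions, and layer sizes (SAGELayer, ModalityAttentionFusion, hidden=64, three emotion classes) are assumptions for illustration, not the authors' MA-MP-GF architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGELayer(nn.Module):
    """GraphSAGE-style mean aggregator: h_v = ReLU(W [x_v ; mean of x_u over neighbors u])."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, in_dim); adj: (channels, channels), row-normalized
        neigh = adj @ x  # mean of each channel's neighbors on the channel graph
        return F.relu(self.lin(torch.cat([x, neigh], dim=-1)))


class ModalityAttentionFusion(nn.Module):
    """Score each modality embedding, softmax-normalize, fuse by weighted sum."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, mods: list) -> torch.Tensor:
        stacked = torch.stack(mods, dim=1)                 # (batch, M, dim)
        alpha = torch.softmax(self.score(stacked), dim=1)  # (batch, M, 1) attention weights
        return (alpha * stacked).sum(dim=1)                # (batch, dim) fused embedding


class EEGtNIRSFusionNet(nn.Module):
    """Hypothetical two-modality network: per-modality channel-graph conv, then attention fusion."""

    def __init__(self, eeg_dim: int, nirs_dim: int, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        # n_classes = 3 is a placeholder; the abstract does not state the emotion taxonomy.
        self.eeg_gnn = SAGELayer(eeg_dim, hidden)
        self.nirs_gnn = SAGELayer(nirs_dim, hidden)
        self.fuse = ModalityAttentionFusion(hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, eeg_x, eeg_adj, nirs_x, nirs_adj):
        # Per-channel features (e.g. EEG band power, tNIRS hemodynamic statistics)
        # are propagated over each modality's channel graph, mean-pooled, then fused.
        h_eeg = self.eeg_gnn(eeg_x, eeg_adj).mean(dim=1)    # (batch, hidden)
        h_nirs = self.nirs_gnn(nirs_x, nirs_adj).mean(dim=1)
        return self.head(self.fuse([h_eeg, h_nirs]))        # class logits
```

Under these assumptions, a batch of 8 trials with 32 EEG channels of 5 features and 20 tNIRS channels of 4 features would run as `EEGtNIRSFusionNet(5, 4)(torch.randn(8, 32, 5), torch.eye(32), torch.randn(8, 20, 4), torch.eye(20))`; the row-normalized adjacency matrices stand in for the inter-channel relationships the paper's graph features capture.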