Deep learning-based multi-brain capsule network for Next-Gen Clinical Emotion recognition using EEG signals

Ritu Dahiya, Mamatha G, Shila Sumol Jawale, Santanu Das, Sagar Choudhary, Vinod Motiram Rathod, Bhawna Janghel Rajput
DOI: 10.1016/j.neuri.2025.100203
Journal: Neuroscience informatics, vol. 5, no. 2, Article 100203
Published: 2025-04-28 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2772528625000184
Citations: 0

Abstract

Deep learning techniques are crucial for next-generation clinical applications, particularly Next-Gen Clinical Emotion recognition. To enhance classification accuracy, we propose an attention-based capsule network model for multiple brain regions (At-CapNet). EEG-tNIRS signals were collected using emotion-inducing visual stimuli to construct the TYUT3.0 dataset, from which EEG and tNIRS features were extracted and mapped into matrices. A multi-brain-region attention mechanism was applied to integrate the EEG and tNIRS features, assigning different weights to features from distinct brain regions to obtain high-quality primary capsules. In addition, a capsule network module was introduced to optimize the number of capsules entering the dynamic routing mechanism, improving computational efficiency. Experimental validation on the TYUT3.0 Next-Gen Clinical Emotion dataset demonstrates that integrating EEG and tNIRS improves recognition accuracy by 1.53% and 14.35% over the two single-modality signals, respectively. Moreover, At-CapNet achieves an average accuracy improvement of 4.98% over the original CapsNet model and outperforms existing CapsNet-based emotion recognition models by 1% to 5%. This research advances non-invasive neurotechnology for precise emotion recognition, with potential implications for next-generation clinical diagnostics and interventions.
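The abstract describes weighting per-region EEG and tNIRS feature matrices with an attention mechanism before forming primary capsules. The sketch below is a minimal illustration of that idea, not the authors' implementation: all shapes (`n_regions`, `feat_dim`, `caps_dim`) and the scalar-score-per-region attention are hypothetical assumptions, and the attention vector `w` stands in for learnable parameters. The CapsNet "squash" nonlinearity is the standard one from the capsule-network literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes (illustrative only).
n_regions, feat_dim, caps_dim = 4, 16, 8

eeg = rng.standard_normal((n_regions, feat_dim))    # EEG feature matrix, one row per brain region
tnirs = rng.standard_normal((n_regions, feat_dim))  # tNIRS feature matrix, same layout

# Concatenate the two modalities per region.
fused = np.concatenate([eeg, tnirs], axis=1)        # shape (n_regions, 2*feat_dim)

# Attention: one scalar score per brain region, softmax-normalized into weights.
w = rng.standard_normal(fused.shape[1])             # learnable in the real model
scores = fused @ w
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                                # region weights, sum to 1

# Re-weight each region's features by its attention weight.
weighted = fused * alpha[:, None]

def squash(v, axis=-1, eps=1e-9):
    """Standard CapsNet squash: shrinks vector norms into [0, 1)."""
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

# Reshape the weighted features into primary capsules and squash them.
caps = weighted.reshape(-1, caps_dim)               # (n_regions*2*feat_dim/caps_dim, caps_dim)
primary = squash(caps)
print(primary.shape)
```

In a trained model the region weights `alpha` would be produced by learned attention parameters rather than a random vector, and the primary capsules would then feed the (pruned) dynamic routing stage the abstract mentions.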
Source journal: Neuroscience informatics (Surgery, Radiology and Imaging, Information Systems, Neurology, Artificial Intelligence, Computer Science Applications, Signal Processing, Critical Care and Intensive Care Medicine, Health Informatics, Clinical Neurology, Pathology and Medical Technology)