A Progressive Multi-Domain Adaptation Network With Reinforced Self-Constructed Graphs for Cross-Subject EEG-Based Emotion and Consciousness Recognition

Impact Factor: 5.2 · CAS Region 2 (Medicine) · JCR Q2 (Engineering, Biomedical)
Rongtao Chen;Chuwen Xie;Jiahui Zhang;Qi You;Jiahui Pan
DOI: 10.1109/TNSRE.2025.3603190
Journal: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 33, pp. 3498-3510
Published: 2025-08-27 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/11142795/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11142795
Citations: 0

Abstract

Electroencephalogram (EEG)-based emotion recognition is a vital component in brain-computer interface applications. However, it faces two significant challenges: 1) extracting domain-invariant features while effectively preserving emotion-related information, and 2) aligning the joint probability distributions of data across different individuals. To address these challenges, we propose a progressive multi-domain adaptation network with reinforced self-constructed graphs. Specifically, we introduce EEG-CutMix to construct unlabeled mixed-domain data, facilitating the transition between source and target domains. Additionally, a reinforced self-constructed graphs module is employed to extract domain-invariant features. Finally, a progressive multi-domain adaptation framework is constructed to smoothly align the data distributions across individuals. Experiments on cross-subject datasets demonstrate that our model achieves state-of-the-art performance on the SEED and SEED-IV datasets, with accuracies of 97.03 ± 1.65% and 88.18 ± 4.55%, respectively. Furthermore, tests on a self-recorded dataset, comprising ten healthy subjects and twelve patients with disorders of consciousness (DOC), show that our model achieves a mean accuracy of 86.65 ± 2.28% in healthy subjects. Notably, it successfully applies to DOC patients, with four subjects achieving emotion recognition accuracy exceeding 70%. These results validate the effectiveness of our model in EEG emotion recognition and highlight its potential for assessing consciousness levels in DOC patients. The source code for the proposed model is available at GitHub-seizeall/mycode.
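The abstract does not detail how EEG-CutMix builds the unlabeled mixed-domain data, so the following is only a minimal sketch of one plausible reading, assuming a CutMix-style splice along the temporal axis of a (channels × timesteps) trial: a random window from a target-domain trial is pasted into a source-domain trial, yielding an intermediate sample between the two domains. The function name `eeg_cutmix` and the Beta(1, 1) mixing ratio are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eeg_cutmix(x_src, x_tgt, rng=None):
    """Hypothetical EEG-CutMix sketch: paste a random temporal window from a
    target-domain trial into a source-domain trial, producing a mixed-domain
    sample. Both inputs have shape (channels, timesteps).

    Returns the mixed trial and the effective source-domain fraction lam_eff.
    """
    rng = rng if rng is not None else np.random.default_rng()
    assert x_src.shape == x_tgt.shape, "trials must share (channels, timesteps)"
    n_ch, n_t = x_src.shape

    lam = rng.beta(1.0, 1.0)            # mixing ratio, as in standard CutMix
    cut_len = int(n_t * (1.0 - lam))    # length of the pasted target window
    start = rng.integers(0, n_t - cut_len + 1)

    mixed = x_src.copy()
    mixed[:, start:start + cut_len] = x_tgt[:, start:start + cut_len]

    # effective source fraction after rounding the cut to whole samples
    lam_eff = 1.0 - cut_len / n_t
    return mixed, lam_eff
```

Because the output is a blend of two domains rather than a cleanly labeled trial, such samples are naturally used unlabeled, e.g. for the distribution-alignment objective the abstract describes, rather than for supervised emotion classification.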
Source Journal
CiteScore: 8.60
Self-citation rate: 8.20%
Articles per year: 479
Review time: 6-12 weeks
Aims and scope: Rehabilitative and neural aspects of biomedical engineering, including functional electrical stimulation, acoustic dynamics, human performance measurement and analysis, nerve stimulation, electromyography, motor control and stimulation; and hardware and software applications for rehabilitation engineering and assistive devices.