Hybrid Network Using Dynamic Graph Convolution and Temporal Self-Attention for EEG-Based Emotion Recognition

Impact Factor: 10.2 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Cheng Cheng, Zikang Yu, Yong Zhang, Lin Feng
{"title":"Hybrid Network Using Dynamic Graph Convolution and Temporal Self-Attention for EEG-Based Emotion Recognition.","authors":"Cheng Cheng, Zikang Yu, Yong Zhang, Lin Feng","doi":"10.1109/TNNLS.2023.3319315","DOIUrl":null,"url":null,"abstract":"<p><p>The electroencephalogram (EEG) signal has become a highly effective decoding target for emotion recognition and has garnered significant attention from researchers. Its spatial topological and time-dependent characteristics make it crucial to explore both spatial information and temporal information for accurate emotion recognition. However, existing studies often focus on either spatial or temporal aspects of EEG signals, neglecting the joint consideration of both perspectives. To this end, this article proposes a hybrid network consisting of a dynamic graph convolution (DGC) module and temporal self-attention representation (TSAR) module, which concurrently incorporates the representative knowledge of spatial topology and temporal context into the EEG emotion recognition task. Specifically, the DGC module is designed to capture the spatial functional relationships within the brain by dynamically updating the adjacency matrix during the model training process. Simultaneously, the TSAR module is introduced to emphasize more valuable time segments and extract global temporal features from EEG signals. To fully exploit the interactivity between spatial and temporal information, the hierarchical cross-attention fusion (H-CAF) module is incorporated to fuse the complementary information from spatial and temporal features. Extensive experimental results on the DEAP, SEED, and SEED-IV datasets demonstrate that the proposed method outperforms other state-of-the-art methods.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2023.3319315","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

The electroencephalogram (EEG) signal has become a highly effective decoding target for emotion recognition and has garnered significant attention from researchers. Its spatial topological and time-dependent characteristics make it crucial to explore both spatial information and temporal information for accurate emotion recognition. However, existing studies often focus on either spatial or temporal aspects of EEG signals, neglecting the joint consideration of both perspectives. To this end, this article proposes a hybrid network consisting of a dynamic graph convolution (DGC) module and temporal self-attention representation (TSAR) module, which concurrently incorporates the representative knowledge of spatial topology and temporal context into the EEG emotion recognition task. Specifically, the DGC module is designed to capture the spatial functional relationships within the brain by dynamically updating the adjacency matrix during the model training process. Simultaneously, the TSAR module is introduced to emphasize more valuable time segments and extract global temporal features from EEG signals. To fully exploit the interactivity between spatial and temporal information, the hierarchical cross-attention fusion (H-CAF) module is incorporated to fuse the complementary information from spatial and temporal features. Extensive experimental results on the DEAP, SEED, and SEED-IV datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
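
The abstract names three components: a dynamic graph convolution (DGC) with a trainable adjacency matrix over EEG channels, a temporal self-attention representation (TSAR) over time segments, and a hierarchical cross-attention fusion (H-CAF) of the two streams. The sketch below is a minimal PyTorch rendering of those ideas only; every layer size, module name, and wiring choice beyond what the abstract states is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the DGC / TSAR / cross-attention-fusion ideas from the
# abstract. Dimensions, pooling, and head counts are assumptions; the paper's
# actual architecture (DOI: 10.1109/TNNLS.2023.3319315) may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphConv(nn.Module):
    """Graph convolution over EEG channels with a learnable adjacency matrix.

    The adjacency matrix is a trainable parameter, so the inter-electrode
    connectivity is updated jointly with the other weights during training
    (the 'dynamically updating the adjacency matrix' idea in the abstract).
    """
    def __init__(self, num_channels: int, in_dim: int, out_dim: int):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_channels))  # learned connectivity
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, features)
        a = F.softmax(F.relu(self.adj), dim=-1)  # row-normalized adjacency
        return F.relu(self.linear(a @ x))        # aggregate neighbors, project

class TemporalSelfAttention(nn.Module):
    """Self-attention over time segments, weighting the more informative ones."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_segments, features); attention gives global context
        out, _ = self.attn(x, x, x)
        return out

class CrossAttentionFusion(nn.Module):
    """Fuse spatial and temporal streams by cross-attending each to the other."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.s2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, spatial: torch.Tensor, temporal: torch.Tensor) -> torch.Tensor:
        s, _ = self.s2t(spatial, temporal, temporal)  # spatial queries temporal
        t, _ = self.t2s(temporal, spatial, spatial)   # temporal queries spatial
        return torch.cat([s.mean(dim=1), t.mean(dim=1)], dim=-1)

class HybridEmotionNet(nn.Module):
    """DGC + TSAR branches fused by cross-attention, then a linear classifier."""
    def __init__(self, num_channels=62, feat_dim=64, num_classes=3):
        super().__init__()
        self.dgc = DynamicGraphConv(num_channels, feat_dim, feat_dim)
        self.tsar = TemporalSelfAttention(feat_dim)
        self.fusion = CrossAttentionFusion(feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, spatial_x: torch.Tensor, temporal_x: torch.Tensor) -> torch.Tensor:
        # spatial_x: (batch, channels, feat_dim), e.g. per-channel band features
        # temporal_x: (batch, segments, feat_dim), e.g. per-segment features
        return self.classifier(self.fusion(self.dgc(spatial_x), self.tsar(temporal_x)))
```

As a shape check under these assumptions, `HybridEmotionNet()(torch.randn(8, 62, 64), torch.randn(8, 10, 64))` returns logits of shape (8, 3); 62 channels matches the SEED montage, while DEAP would use 32.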

Source Journal

IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
About the journal: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.