Discriminative Adversarial Network Based on Spatial–Temporal–Graph Fusion for Motor Imagery Recognition

Impact Factor 4.5 · CAS Zone 2, Computer Science · JCR Q1, COMPUTER SCIENCE, CYBERNETICS
Qingshan She;Tie Chen;Feng Fang;Yunyuan Gao;Yingchun Zhang
{"title":"Discriminative Adversarial Network Based on Spatial–Temporal–Graph Fusion for Motor Imagery Recognition","authors":"Qingshan She;Tie Chen;Feng Fang;Yunyuan Gao;Yingchun Zhang","doi":"10.1109/TCSS.2024.3462823","DOIUrl":null,"url":null,"abstract":"Motor imagery (MI)-based electroencephalography (EEG) stands as a prominent paradigm in the brain–computer interface (BCI) field, which is frequently applied in neural rehabilitation and gaming due to its accessibility and reliability. Despite extensive research dedicated to MI EEG classification algorithms, a notable deficiency still remains: their performance is often optimal only in subject-specific or dataset-specific scenarios, which undermines their generalization capability, hence restricting BCI systems' practical utility in real-world contexts. To address this limitation, this study introduces a cutting-edge approach: a discriminative adversarial network based on spatial–temporal–graph fusion (STG-DAN). This innovation aims to learn features that are not only class-discriminative but also domain-invariant. Specifically, the feature extraction module guarantees the feature discriminativeness by amalgamating spatial–temporal and graph-related features, while the domain alignment module focuses on both global domain and local subdomain. The two modules are incorporated into one adversarial learning framework to facilitate the acquisition of domain-invariant features. Evaluations on two publicly accessible datasets, BCI competition IV 2a and OpenBMI, affirm the superiority of our proposed model (averaged accuracy = 62.94% and 73.01% for the two datasets in cross-subject circumstance, respectively). In cross-dataset circumstances, it also outperforms several state-of-the-art algorithms, attesting to the potency of STG-DAN.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"972-983"},"PeriodicalIF":4.5000,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Social Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10706623/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0

Abstract

Motor imagery (MI)-based electroencephalography (EEG) is a prominent paradigm in the brain–computer interface (BCI) field, frequently applied in neural rehabilitation and gaming due to its accessibility and reliability. Despite extensive research on MI EEG classification algorithms, a notable deficiency remains: their performance is often optimal only in subject-specific or dataset-specific scenarios, which undermines their generalization capability and restricts the practical utility of BCI systems in real-world contexts. To address this limitation, this study introduces a discriminative adversarial network based on spatial–temporal–graph fusion (STG-DAN), designed to learn features that are both class-discriminative and domain-invariant. Specifically, the feature extraction module ensures feature discriminativeness by fusing spatial–temporal and graph-related features, while the domain alignment module attends to both the global domain and local subdomains. The two modules are incorporated into a single adversarial learning framework to facilitate the acquisition of domain-invariant features. Evaluations on two publicly accessible datasets, BCI Competition IV 2a and OpenBMI, affirm the superiority of the proposed model (average cross-subject accuracy of 62.94% and 73.01%, respectively). In the cross-dataset setting, it also outperforms several state-of-the-art algorithms, attesting to the effectiveness of STG-DAN.
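The abstract describes the architecture only at a high level. As a rough illustration of how its three ingredients fit together, the following is a minimal PyTorch-style sketch, not the paper's implementation: all layer shapes, the learnable channel adjacency, the `STGDANSketch`/`GradReverse` names, and the use of a gradient-reversal layer for the adversarial domain head are assumptions. A spatial–temporal convolutional branch and a graph branch are fused into one feature vector, which feeds both a class-discriminative head and a domain discriminator trained adversarially.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity forward, negated (scaled) gradient backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None


class STGDANSketch(nn.Module):
    """Illustrative STG-DAN-style model (assumed design): fused
    spatial-temporal + graph features feed a class head and an
    adversarial domain head."""

    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, n_domains=2):
        super().__init__()
        # Spatial-temporal branch (EEGNet-style temporal then spatial conv).
        self.temporal = nn.Sequential(
            nn.Conv2d(1, 16, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(16),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(16, 32, (n_channels, 1), bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        # Graph branch: one graph convolution over a learnable channel adjacency.
        self.adj = nn.Parameter(torch.eye(n_channels))
        self.gconv = nn.Linear(n_samples, 64)
        # Class-discriminative head and adversarial domain discriminator.
        self.classifier = nn.LazyLinear(n_classes)
        self.domain_disc = nn.Sequential(
            nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, n_domains)
        )

    def forward(self, x, lamb=1.0):
        # x: (batch, channels, samples) raw EEG trials.
        st = self.spatial(self.temporal(x.unsqueeze(1))).flatten(1)
        # Row-normalized adjacency mixes channels before the graph projection.
        g = torch.relu(self.gconv(torch.softmax(self.adj, dim=-1) @ x)).mean(dim=1)
        feat = torch.cat([st, g], dim=1)  # spatial-temporal-graph fusion
        class_logits = self.classifier(feat)
        # Gradient reversal makes the fused features adversarial to the domain head.
        domain_logits = self.domain_disc(GradReverse.apply(feat, lamb))
        return class_logits, domain_logits


model = STGDANSketch()
x = torch.randn(8, 22, 1000)  # 8 trials, 22 EEG channels, 1000 time samples
class_logits, domain_logits = model(x)
print(class_logits.shape, domain_logits.shape)  # torch.Size([8, 4]) torch.Size([8, 2])
```

In training, the class head would be optimized on labeled source trials while the reversed gradient from the domain head pushes the fused features toward domain invariance; the paper's alignment additionally operates on local subdomains, which this sketch omits.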
Source journal
IEEE Transactions on Computational Social Systems (Social Sciences, miscellaneous)
CiteScore: 10.00
Self-citation rate: 20.00%
Articles per year: 316
About the journal: IEEE Transactions on Computational Social Systems focuses on such topics as modeling, simulation, analysis, and understanding of social systems from a quantitative and/or computational perspective. "Systems" include man–man, man–machine, and machine–machine organizations and adversarial situations, as well as social media structures and their dynamics. More specifically, the transactions publishes articles on modeling the dynamics of social systems, methodologies for incorporating and representing socio-cultural and behavioral aspects in computational modeling, analysis of social system behavior and structure, and paradigms for social systems modeling and simulation. The journal also features articles on social network dynamics, social intelligence and cognition, social systems design and architectures, socio-cultural modeling and representation, computational behavior modeling, and their applications.