{"title":"通过时空表征学习脑电运动特征","authors":"Tian-Yu Xiang;Xiao-Hu Zhou;Xiao-Liang Xie;Shi-Qi Liu;Hong-Jun Yang;Zhen-Qiu Feng;Mei-Jiang Gui;Hao Li;De-Xing Huang;Xiu-Ling Liu;Zeng-Guang Hou","doi":"10.1109/TETCI.2024.3425328","DOIUrl":null,"url":null,"abstract":"Electroencephalogram (EEG) is a widely used neural imaging technique for modeling motor characteristics. However, current studies have primarily focused on temporal representations of EEG, with less emphasis on the spatial and functional connections among electrodes. This study introduces a novel two-stream model to analyze both temporal and spatial representations of EEG for learning motor characteristics. Temporal representations are extracted with a set of convolutional neural networks (CNN) treated as dynamic filters, while spatial representations are learned by graph neural networks (GNN) using learnable adjacency matrices. At each stage, a res-block is designed to integrate temporal and spatial representations, facilitating a fusion of temporal-spatial information. Finally, the summarized representations of both streams are fused with fully connected neural networks to learn motor characteristics. Experimental evaluations on the Physionet, OpenBMI, and BCI Competition IV Dataset 2a demonstrate the model's efficacy, achieving accuracies of <inline-formula><tex-math>$73.6\\%/70.4\\%$</tex-math></inline-formula> for four-class subject-dependent/independent paradigms, <inline-formula><tex-math>$84.2\\%/82.0\\%$</tex-math></inline-formula> for two-class subject-dependent/independent paradigms, and 78.5% for a four-class subject-dependent paradigm, respectively. The encouraged results underscore the model's potential in understanding EEG-based motor characteristics, paving the way for advanced brain-computer interface systems.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"933-945"},"PeriodicalIF":5.3000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning EEG Motor Characteristics via Temporal-Spatial Representations\",\"authors\":\"Tian-Yu Xiang;Xiao-Hu Zhou;Xiao-Liang Xie;Shi-Qi Liu;Hong-Jun Yang;Zhen-Qiu Feng;Mei-Jiang Gui;Hao Li;De-Xing Huang;Xiu-Ling Liu;Zeng-Guang Hou\",\"doi\":\"10.1109/TETCI.2024.3425328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Electroencephalogram (EEG) is a widely used neural imaging technique for modeling motor characteristics. However, current studies have primarily focused on temporal representations of EEG, with less emphasis on the spatial and functional connections among electrodes. This study introduces a novel two-stream model to analyze both temporal and spatial representations of EEG for learning motor characteristics. Temporal representations are extracted with a set of convolutional neural networks (CNN) treated as dynamic filters, while spatial representations are learned by graph neural networks (GNN) using learnable adjacency matrices. At each stage, a res-block is designed to integrate temporal and spatial representations, facilitating a fusion of temporal-spatial information. Finally, the summarized representations of both streams are fused with fully connected neural networks to learn motor characteristics. 
Experimental evaluations on the Physionet, OpenBMI, and BCI Competition IV Dataset 2a demonstrate the model's efficacy, achieving accuracies of <inline-formula><tex-math>$73.6\\\\%/70.4\\\\%$</tex-math></inline-formula> for four-class subject-dependent/independent paradigms, <inline-formula><tex-math>$84.2\\\\%/82.0\\\\%$</tex-math></inline-formula> for two-class subject-dependent/independent paradigms, and 78.5% for a four-class subject-dependent paradigm, respectively. The encouraged results underscore the model's potential in understanding EEG-based motor characteristics, paving the way for advanced brain-computer interface systems.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"9 1\",\"pages\":\"933-945\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10663067/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10663067/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Electroencephalogram (EEG) is a widely used neural imaging technique for modeling motor characteristics. However, current studies have primarily focused on temporal representations of EEG, with less emphasis on the spatial and functional connections among electrodes. This study introduces a novel two-stream model to analyze both temporal and spatial representations of EEG for learning motor characteristics. Temporal representations are extracted with a set of convolutional neural networks (CNN) treated as dynamic filters, while spatial representations are learned by graph neural networks (GNN) using learnable adjacency matrices. At each stage, a res-block is designed to integrate temporal and spatial representations, facilitating a fusion of temporal-spatial information. Finally, the summarized representations of both streams are fused with fully connected neural networks to learn motor characteristics. Experimental evaluations on the Physionet, OpenBMI, and BCI Competition IV Dataset 2a demonstrate the model's efficacy, achieving accuracies of 73.6%/70.4% for four-class subject-dependent/independent paradigms, 84.2%/82.0% for two-class subject-dependent/independent paradigms, and 78.5% for a four-class subject-dependent paradigm, respectively. The encouraging results underscore the model's potential in understanding EEG-based motor characteristics, paving the way for advanced brain-computer interface systems.
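To make the described architecture concrete, the sketch below shows one possible temporal-spatial stage in PyTorch: a 1-D convolution as the temporal stream, a learnable adjacency matrix with simple graph message passing as the spatial stream, and additive residual fusion. This is an illustrative assumption, not the authors' implementation; the class name TemporalSpatialStage, the layer sizes, the kernel size, the softmax normalization of the adjacency, and the fusion-by-addition rule are all hypothetical choices made for demonstration.

```python
# Minimal sketch (assumed, not the paper's code): one temporal-spatial stage
# combining a CNN temporal stream, a GNN-style spatial stream with a learnable
# adjacency matrix over electrodes, and a res-block style additive fusion.
import torch
import torch.nn as nn


class TemporalSpatialStage(nn.Module):
    def __init__(self, n_channels: int, n_samples: int, kernel_size: int = 15):
        super().__init__()
        # Temporal stream: 1-D convolution along the time axis acts as a
        # learnable (dynamic) filter bank over the EEG channels.
        self.temporal = nn.Conv1d(
            n_channels, n_channels, kernel_size, padding=kernel_size // 2
        )
        # Spatial stream: learnable adjacency matrix over electrodes,
        # initialized near identity, used for message passing A @ X.
        self.adjacency = nn.Parameter(
            torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels)
        )
        self.spatial_proj = nn.Linear(n_samples, n_samples)
        self.act = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        t = self.act(self.temporal(x))             # temporal representation
        a = torch.softmax(self.adjacency, dim=-1)  # normalized electrode graph
        s = self.act(self.spatial_proj(a @ x))     # spatial representation
        return x + t + s                           # residual temporal-spatial fusion


if __name__ == "__main__":
    eeg = torch.randn(8, 64, 400)  # 8 trials, 64 electrodes, 400 time samples
    stage = TemporalSpatialStage(n_channels=64, n_samples=400)
    print(stage(eeg).shape)        # torch.Size([8, 64, 400])
```

In a full model of this kind, several such stages would typically be stacked, with the summarized outputs of the temporal and spatial streams passed to fully connected layers for classification, as the abstract describes.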
Journal Introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronics only publication. TETCI publishes six issues per year.
Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.