Time–frequency–space transformer EEG decoding for spinal cord injury

IF 3.1 · JCR Q2 (Neurosciences) · CAS Zone 3 (Engineering & Technology)
Fangzhou Xu, Ming Liu, Xinyi Chen, Yihao Yan, Jinzhao Zhao, Yanbing Liu, Jiaqi Zhao, Shaopeng Pang, Sen Yin, Jiancai Leng, Yang Zhang
{"title":"用于脊髓损伤的时频空间变压器脑电图解码","authors":"Fangzhou Xu, Ming Liu, Xinyi Chen, Yihao Yan, Jinzhao Zhao, Yanbing Liu, Jiaqi Zhao, Shaopeng Pang, Sen Yin, Jiancai Leng, Yang Zhang","doi":"10.1007/s11571-024-10135-8","DOIUrl":null,"url":null,"abstract":"<p>Transformer neural networks based on multi-head self-attention are effective in several fields. To capture brain activity on electroencephalographic (EEG) signals and construct an effective pattern recognition model, this paper explores the multi-channel deep feature decoding method utilizing the self-attention mechanism. By integrating inter-channel features with intra-channel features, the self-attention mechanism generates a deep feature vector that encompasses information from all brain activities. In this paper, a time-frequency-spatial domain analysis of motor imagery (MI) based EEG signals from spinal cord injury patients is performed to construct a transformer neural network-based MI classification model. The proposed algorithm is named time-frequency-spatial transformer. The time-frequency and spatial domain feature vectors extracted from the EEG signals are input into the transformer neural network for multiple self-attention depth feature encoding, a peak classification accuracy of 93.56% is attained through the fully connected layer. By constructing the attention matrix brain network, it can be inferred that the channel connections constructed by the attention heads have similarities to the brain networks constructed by the EEG raw signals. The experimental results reveal that the self-attention coefficient brain network holds significant potential for brain activity analysis. The self-attention coefficient brain network can better illustrate correlated connections and show sample differences. Attention coefficient brain networks can provide a more discriminative approach for analyzing brain activity in clinical settings.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Time–frequency–space transformer EEG decoding for spinal cord injury\",\"authors\":\"Fangzhou Xu, Ming Liu, Xinyi Chen, Yihao Yan, Jinzhao Zhao, Yanbing Liu, Jiaqi Zhao, Shaopeng Pang, Sen Yin, Jiancai Leng, Yang Zhang\",\"doi\":\"10.1007/s11571-024-10135-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Transformer neural networks based on multi-head self-attention are effective in several fields. To capture brain activity on electroencephalographic (EEG) signals and construct an effective pattern recognition model, this paper explores the multi-channel deep feature decoding method utilizing the self-attention mechanism. By integrating inter-channel features with intra-channel features, the self-attention mechanism generates a deep feature vector that encompasses information from all brain activities. In this paper, a time-frequency-spatial domain analysis of motor imagery (MI) based EEG signals from spinal cord injury patients is performed to construct a transformer neural network-based MI classification model. The proposed algorithm is named time-frequency-spatial transformer. 
The time-frequency and spatial domain feature vectors extracted from the EEG signals are input into the transformer neural network for multiple self-attention depth feature encoding, a peak classification accuracy of 93.56% is attained through the fully connected layer. By constructing the attention matrix brain network, it can be inferred that the channel connections constructed by the attention heads have similarities to the brain networks constructed by the EEG raw signals. The experimental results reveal that the self-attention coefficient brain network holds significant potential for brain activity analysis. The self-attention coefficient brain network can better illustrate correlated connections and show sample differences. Attention coefficient brain networks can provide a more discriminative approach for analyzing brain activity in clinical settings.</p>\",\"PeriodicalId\":10500,\"journal\":{\"name\":\"Cognitive Neurodynamics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Neurodynamics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s11571-024-10135-8\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Neurodynamics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11571-024-10135-8","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Transformer neural networks based on multi-head self-attention are effective in several fields. To capture brain activity in electroencephalographic (EEG) signals and construct an effective pattern recognition model, this paper explores a multi-channel deep feature decoding method built on the self-attention mechanism. By integrating inter-channel features with intra-channel features, the self-attention mechanism generates a deep feature vector that encompasses information from all brain activities. In this paper, a time-frequency-spatial domain analysis of motor imagery (MI)-based EEG signals from spinal cord injury patients is performed to construct a transformer neural network-based MI classification model. The proposed algorithm is named the time-frequency-spatial transformer. The time-frequency and spatial domain feature vectors extracted from the EEG signals are input into the transformer neural network for multiple rounds of self-attention deep feature encoding, and a peak classification accuracy of 93.56% is attained through the fully connected layer. By constructing the attention-matrix brain network, it can be inferred that the channel connections constructed by the attention heads are similar to the brain networks constructed from the raw EEG signals. The experimental results reveal that the self-attention coefficient brain network holds significant potential for brain activity analysis: it can better illustrate correlated connections and show differences between samples, and thus provides a more discriminative approach for analyzing brain activity in clinical settings.
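
To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of the general idea: per-channel time-frequency-spatial feature vectors are treated as tokens, mixed by a multi-head self-attention encoder, pooled into one deep feature vector, and classified by a fully connected layer; the attention weights between channel tokens are then read out as a channel-by-channel "attention coefficient brain network". The class name `TimeFreqSpaceTransformer`, the helper `attention_brain_network`, and all dimensions and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a time-frequency-space transformer for MI EEG decoding.
# Assumes per-channel feature vectors have already been extracted (e.g. band power
# and spatially filtered features); none of the names below come from the paper.
import torch
import torch.nn as nn


class TimeFreqSpaceTransformer(nn.Module):
    """Self-attention encoder over per-channel EEG feature tokens (illustrative)."""

    def __init__(self, n_channels=32, feat_dim=64, d_model=64,
                 n_heads=8, n_layers=2, n_classes=2):
        super().__init__()
        # Project each channel's time-frequency-spatial feature vector to the model width.
        self.embed = nn.Linear(feat_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Fully connected classification head, as described in the abstract.
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, feat_dim) -- one feature token per EEG channel.
        tokens = self.embed(x)            # (batch, n_channels, d_model)
        encoded = self.encoder(tokens)    # inter-/intra-channel mixing via self-attention
        pooled = encoded.mean(dim=1)      # aggregate channels into one deep feature vector
        return self.classifier(pooled)    # class logits (e.g. MI task vs. rest)


def attention_brain_network(model, x):
    """Return an (n_channels x n_channels) self-attention coefficient matrix.

    One plausible reading of the 'attention coefficient brain network': the
    attention weights between channel tokens in the first encoder layer,
    averaged over heads and trials. Calling the layer's MultiheadAttention
    directly is an assumption about one reasonable implementation.
    """
    model.eval()
    with torch.no_grad():
        tokens = model.embed(x)
        first_layer = model.encoder.layers[0]
        # need_weights=True returns the attention map; average_attn_weights
        # averages it over the attention heads.
        _, attn = first_layer.self_attn(
            tokens, tokens, tokens,
            need_weights=True, average_attn_weights=True)
    return attn.mean(dim=0)  # average over trials -> channel-by-channel connectivity


if __name__ == "__main__":
    model = TimeFreqSpaceTransformer(n_channels=32, feat_dim=64, n_classes=2)
    eeg_features = torch.randn(8, 32, 64)       # 8 trials, 32 channels, 64-dim features
    logits = model(eeg_features)                 # (8, 2)
    connectivity = attention_brain_network(model, eeg_features)  # (32, 32)
    print(logits.shape, connectivity.shape)
```

Averaging the attention weights over heads and trials yields a symmetric-looking channel connectivity matrix that can be compared against brain networks estimated from the raw EEG, which is the kind of comparison the abstract describes; the actual feature extraction, network depth, and pooling strategy in the paper may differ.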

Source journal: Cognitive Neurodynamics (Medicine – Neuroscience)
CiteScore: 6.90
Self-citation rate: 18.90%
Annual articles: 140
Review time: 12 months
Journal description: Cognitive Neurodynamics provides a unique forum of communication and cooperation for scientists and engineers working in the field of cognitive neurodynamics, intelligent science and applications, bridging the gap between theory and application, without any preference for purely theoretical, experimental or computational models. The emphasis is on publishing original models of cognitive neurodynamics, novel computational theories and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is also very welcome. The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, and intelligent science, as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computation and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that for one reason or another do not make a sufficiently complete story to justify a full article. Brief communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.