MSAttNet: Multi-scale attention convolutional neural network for motor imagery classification

IF 2.3 | Medicine (CAS Tier 4) | Q2 Biochemical Research Methods
Ruiyu Zhao, Ian Daly, Yixin Chen, Weijie Wu, Lifei Liu, Xingyu Wang, Andrzej Cichocki, Jing Jin
{"title":"基于多尺度注意卷积神经网络的运动意象分类","authors":"Ruiyu Zhao ,&nbsp;Ian Daly ,&nbsp;Yixin Chen ,&nbsp;Weijie Wu ,&nbsp;Lifei Liu ,&nbsp;Xingyu Wang ,&nbsp;Andrzej Cichocki ,&nbsp;Jing Jin","doi":"10.1016/j.jneumeth.2025.110578","DOIUrl":null,"url":null,"abstract":"<div><h3>Background:</h3><div>Convolutional neural networks (CNNs) are widely employed in motor imagery (MI) classification. However, due to cumbersome data collection experiments, and limited, noisy, and non-stationary EEG signals, small MI datasets present considerable challenges to the design of these decoding algorithms.</div></div><div><h3>New method:</h3><div>To capture more feature information from inadequately sized data, we propose a new method, a multi-scale attention convolutional neural network (MSAttNet). Our method includes three main components–a multi-band segmentation module, an attention spatial convolution module, and a multi-scale temporal convolution module. First, the multi-band segmentation module adopts a filter bank with overlapping frequency bands to enhance features in the frequency domain. Then, the attention spatial convolution module is used to adaptively adjust different convolutional kernel parameters according to the input through the attention mechanism to capture the features of different datasets. The outputs of the attention spatial convolution module are grouped to perform multi-scale temporal convolution. Finally, the output of the multi-scale temporal convolution module uses the bilinear pooling layer to extract temporal features and perform noise elimination. The extracted features are then classified.</div></div><div><h3>Results:</h3><div>We use four datasets, including <em>BCI Competition IV Dataset IIa</em>, <em>BCI Competition IV Dataset IIb</em>, the <em>OpenBMI</em> dataset and the <em>ECUST-MI</em> dataset, to test our proposed method. MSAttNet achieves accuracies of 78.20%, 84.52%, 75.94% and 78.60% in cross-session experiments, respectively.</div></div><div><h3>Comparison with existing methods</h3><div>: Compared with state-of-the-art algorithms, MSAttNet enhances the decoding performance of MI tasks.</div></div><div><h3>Conclusion:</h3><div>MSAttNet effectively addresses the challenges of MI-EEG datasets, improving decoding performance by robust feature extraction.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110578"},"PeriodicalIF":2.3000,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MSAttNet: Multi-scale attention convolutional neural network for motor imagery classification\",\"authors\":\"Ruiyu Zhao ,&nbsp;Ian Daly ,&nbsp;Yixin Chen ,&nbsp;Weijie Wu ,&nbsp;Lifei Liu ,&nbsp;Xingyu Wang ,&nbsp;Andrzej Cichocki ,&nbsp;Jing Jin\",\"doi\":\"10.1016/j.jneumeth.2025.110578\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background:</h3><div>Convolutional neural networks (CNNs) are widely employed in motor imagery (MI) classification. However, due to cumbersome data collection experiments, and limited, noisy, and non-stationary EEG signals, small MI datasets present considerable challenges to the design of these decoding algorithms.</div></div><div><h3>New method:</h3><div>To capture more feature information from inadequately sized data, we propose a new method, a multi-scale attention convolutional neural network (MSAttNet). 
Our method includes three main components–a multi-band segmentation module, an attention spatial convolution module, and a multi-scale temporal convolution module. First, the multi-band segmentation module adopts a filter bank with overlapping frequency bands to enhance features in the frequency domain. Then, the attention spatial convolution module is used to adaptively adjust different convolutional kernel parameters according to the input through the attention mechanism to capture the features of different datasets. The outputs of the attention spatial convolution module are grouped to perform multi-scale temporal convolution. Finally, the output of the multi-scale temporal convolution module uses the bilinear pooling layer to extract temporal features and perform noise elimination. The extracted features are then classified.</div></div><div><h3>Results:</h3><div>We use four datasets, including <em>BCI Competition IV Dataset IIa</em>, <em>BCI Competition IV Dataset IIb</em>, the <em>OpenBMI</em> dataset and the <em>ECUST-MI</em> dataset, to test our proposed method. MSAttNet achieves accuracies of 78.20%, 84.52%, 75.94% and 78.60% in cross-session experiments, respectively.</div></div><div><h3>Comparison with existing methods</h3><div>: Compared with state-of-the-art algorithms, MSAttNet enhances the decoding performance of MI tasks.</div></div><div><h3>Conclusion:</h3><div>MSAttNet effectively addresses the challenges of MI-EEG datasets, improving decoding performance by robust feature extraction.</div></div>\",\"PeriodicalId\":16415,\"journal\":{\"name\":\"Journal of Neuroscience Methods\",\"volume\":\"424 \",\"pages\":\"Article 110578\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Neuroscience Methods\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0165027025002225\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BIOCHEMICAL RESEARCH METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Neuroscience Methods","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0165027025002225","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract

MSAttNet: Multi-scale attention convolutional neural network for motor imagery classification

Background:

Convolutional neural networks (CNNs) are widely employed in motor imagery (MI) classification. However, because data collection experiments are cumbersome and EEG signals are limited, noisy, and non-stationary, small MI datasets present considerable challenges for the design of these decoding algorithms.

New method:

To capture more feature information from datasets of limited size, we propose a new method, a multi-scale attention convolutional neural network (MSAttNet). Our method comprises three main components: a multi-band segmentation module, an attention spatial convolution module, and a multi-scale temporal convolution module. First, the multi-band segmentation module applies a filter bank with overlapping frequency bands to enhance features in the frequency domain. Then, the attention spatial convolution module uses an attention mechanism to adaptively adjust its convolutional kernel parameters according to the input, capturing the characteristics of different datasets. The outputs of the attention spatial convolution module are grouped for multi-scale temporal convolution. Finally, a bilinear pooling layer is applied to the output of the multi-scale temporal convolution module to extract temporal features and suppress noise. The extracted features are then classified.
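
For illustration, here is a minimal PyTorch sketch of such a pipeline, written only from the description above. The filter-bank bands, layer sizes, kernel lengths, number of attention branches, and the pooling/classifier head are all illustrative assumptions rather than the authors' published configuration, and average pooling stands in for the bilinear pooling step.

```python
# Hypothetical MSAttNet-style sketch (illustration only, not the authors' code).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfiltfilt


def multi_band_segmentation(eeg, fs=250, bands=((4, 16), (12, 24), (20, 32))):
    """Filter-bank step: band-pass raw EEG into overlapping sub-bands.

    eeg: (trials, channels, samples) -> (trials, n_bands, channels, samples).
    The band edges are assumed values chosen to overlap, as the abstract describes.
    """
    filtered = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        filtered.append(sosfiltfilt(sos, eeg, axis=-1))
    return np.stack(filtered, axis=1).copy()


class AttentionSpatialConv(nn.Module):
    """Candidate spatial convolutions mixed by an input-dependent softmax weight,
    one plausible reading of 'adaptively adjust kernel parameters via attention'."""

    def __init__(self, n_bands, n_channels, n_filters=16, n_branches=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(n_bands, n_filters, (n_channels, 1)) for _ in range(n_branches)
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(n_bands, n_branches), nn.Softmax(dim=-1),
        )

    def forward(self, x):                                   # x: (B, bands, chans, T)
        w = self.attn(x)                                    # (B, n_branches)
        outs = torch.stack([b(x) for b in self.branches], dim=1)
        return (w[:, :, None, None, None] * outs).sum(dim=1)  # (B, filters, 1, T)


class MultiScaleTemporalConv(nn.Module):
    """Depthwise temporal convolutions at several kernel lengths, concatenated."""

    def __init__(self, n_filters=16, scales=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(n_filters, n_filters, (1, k), padding=(0, k // 2),
                      groups=n_filters)
            for k in scales
        )

    def forward(self, x):                                   # x: (B, filters, 1, T)
        return torch.cat([b(x) for b in self.branches], dim=1)


class MSAttNetSketch(nn.Module):
    """Attention spatial conv -> multi-scale temporal conv -> temporal pooling
    (standing in for the paper's bilinear pooling) -> linear classifier."""

    def __init__(self, n_bands=3, n_channels=22, n_classes=4, n_filters=16):
        super().__init__()
        self.spatial = AttentionSpatialConv(n_bands, n_channels, n_filters)
        self.temporal = MultiScaleTemporalConv(n_filters)
        self.pool = nn.AdaptiveAvgPool2d((1, 8))
        self.classify = nn.Linear(3 * n_filters * 8, n_classes)  # 3 temporal scales

    def forward(self, x):                                   # x: (B, bands, chans, T)
        x = self.temporal(self.spatial(x))
        return self.classify(torch.flatten(self.pool(x), 1))


# Shape check on synthetic data: 8 trials, 22 channels, 4 s at 250 Hz.
x = multi_band_segmentation(np.random.randn(8, 22, 1000))
logits = MSAttNetSketch()(torch.tensor(x, dtype=torch.float32))
print(logits.shape)                                         # torch.Size([8, 4])
```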

Results:

We test the proposed method on four datasets: BCI Competition IV Dataset IIa, BCI Competition IV Dataset IIb, the OpenBMI dataset, and the ECUST-MI dataset. In cross-session experiments, MSAttNet achieves accuracies of 78.20%, 84.52%, 75.94%, and 78.60%, respectively.
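
For context, "cross-session" here means that training and test data come from different recording sessions of the same subject (for example, BCI Competition IV Dataset IIa provides a training session "T" and an evaluation session "E" per subject). The sketch below illustrates such a split; the load_session loader is a hypothetical placeholder, not the authors' evaluation code.

```python
# Hypothetical cross-session evaluation split (illustration only).
import numpy as np


def cross_session_accuracy(model, load_session, subject):
    X_train, y_train = load_session(subject, session="T")  # earlier session: fit
    X_test, y_test = load_session(subject, session="E")    # later session: evaluate
    model.fit(X_train, y_train)
    return float(np.mean(model.predict(X_test) == y_test))
```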

Comparison with existing methods:

Compared with state-of-the-art algorithms, MSAttNet enhances the decoding performance of MI tasks.

Conclusion:

MSAttNet effectively addresses the challenges of MI-EEG datasets, improving decoding performance through robust feature extraction.
Source journal:
Journal of Neuroscience Methods (Medicine - Neuroscience)
CiteScore: 7.10
Self-citation rate: 3.30%
Articles per year: 226
Review time: 52 days
Journal description: The Journal of Neuroscience Methods publishes papers that describe new methods that are specifically for neuroscience research conducted in invertebrates, vertebrates or in man. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's Scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.