From Frequency to Temporal: Three Simple Steps Achieve Lightweight High-Performance Motor Imagery Decoding.

Impact Factor 4.5 · CAS Tier 2 (Medicine) · JCR Q2, Engineering, Biomedical
Yuan Li, Diwei Su, Xiaonan Yang, Xiangcun Wang, Hongxi Zhao, Jiacai Zhang
{"title":"From Frequency to Temporal: Three Simple Steps Achieve Lightweight High-Performance Motor Imagery Decoding.","authors":"Yuan Li, Diwei Su, Xiaonan Yang, Xiangcun Wang, Hongxi Zhao, Jiacai Zhang","doi":"10.1109/TBME.2025.3579528","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To address the challenges of high data noise and substantial model computational complexity in Electroencephalography (EEG)-based motor imagery decoding, this study aims to develop a decoding method with both high accuracy and low computational cost.</p><p><strong>Methods: </strong>First, frequency domain analysis was performed to reveal the frequency modeling patterns of deep learning models. Utilizing prior knowledge from brain science regarding the key frequency bands for motor imagery, we adjusted the convolution kernels and pooling sizes of EEGNet to focus on effective frequency bands. Subsequently, a residual network was introduced to preserve high-frequency detailed features. Finally, temporal convolution modules were used to deeply capture temporal dependencies, significantly enhancing feature discriminability.</p><p><strong>Results: </strong>Experiments were conducted on the BCI Competition IV 2a and 2b datasets. Our method achieved average classification accuracies of 86.23% and 86.75% respectively, surpassing advanced models like EEG-Conformer and EEG-TransNet. Meanwhile, the Multiply-accumulate operations (MACs) were 27.16M, a reduction of over 50% compared to the comparison models, and the Forward/Backward Pass Size was 14.33MB.</p><p><strong>Conclusion: </strong>By integrating prior knowledge from brain science with deep learning techniques-specifically frequency domain analysis, residual networks, and temporal convolutions-it is possible to effectively improve the accuracy of EEG motor imagery decoding while substantially reducing model computational complexity.</p><p><strong>Significance: </strong>This paper employs the simplest and most fundamental techniques in its design, highlighting the critical role of brain science knowledge in model development. The proposed method demonstrates broad application potential.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.5000,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Biomedical Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/TBME.2025.3579528","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective: To address the challenges of high data noise and substantial model computational complexity in Electroencephalography (EEG)-based motor imagery decoding, this study aims to develop a decoding method with both high accuracy and low computational cost.

Methods: First, frequency domain analysis was performed to reveal the frequency modeling patterns of deep learning models. Utilizing prior knowledge from brain science regarding the key frequency bands for motor imagery, we adjusted the convolution kernels and pooling sizes of EEGNet to focus on effective frequency bands. Subsequently, a residual network was introduced to preserve high-frequency detailed features. Finally, temporal convolution modules were used to deeply capture temporal dependencies, significantly enhancing feature discriminability.
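
As a concrete illustration of these three steps, the minimal PyTorch sketch below wires a band-focused EEGNet-style front end, a residual temporal stage, and a small classifier head. All layer sizes (32-sample temporal kernel, 4x average pooling, 8/16 feature maps, 22 channels, 4-second trials at 250 Hz) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of the three-step design; every hyperparameter below is an
# assumption for illustration, not the authors' published configuration.
import torch
import torch.nn as nn


class BandFocusedFrontEnd(nn.Module):
    """EEGNet-style temporal + spatial convolution with kernel/pool sizes
    chosen to emphasize the motor-imagery-relevant mu/beta bands."""
    def __init__(self, n_channels=22, f1=8, depth=2, temporal_kernel=32, pool=4):
        super().__init__()
        self.temporal = nn.Conv2d(1, f1, (1, temporal_kernel),
                                  padding=(0, temporal_kernel // 2), bias=False)
        self.spatial = nn.Conv2d(f1, f1 * depth, (n_channels, 1),
                                 groups=f1, bias=False)   # depthwise spatial filter
        self.bn = nn.BatchNorm2d(f1 * depth)
        self.act = nn.ELU()
        self.pool = nn.AvgPool2d((1, pool))                # modest pooling keeps high-frequency detail

    def forward(self, x):                                  # x: (batch, 1, channels, time)
        return self.pool(self.act(self.bn(self.spatial(self.temporal(x)))))


class ResidualTemporalStage(nn.Module):
    """Dilated 1-D temporal convolutions with a residual (skip) connection."""
    def __init__(self, channels=16, kernel=7, dilation=2):
        super().__init__()
        pad = (kernel - 1) * dilation // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=pad, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):                                  # x: (batch, channels, time)
        return self.act(x + self.conv2(self.act(self.conv1(x))))  # skip path preserves detail


class LightweightMIDecoder(nn.Module):
    """Front end -> residual temporal stage -> linear classifier."""
    def __init__(self, n_channels=22, n_classes=4, n_samples=1000):
        super().__init__()
        self.frontend = BandFocusedFrontEnd(n_channels=n_channels)
        self.temporal_stage = ResidualTemporalStage(channels=16)
        self.head = nn.Linear(16 * (n_samples // 4), n_classes)  # assumes 4x pooling

    def forward(self, x):                                  # x: (batch, 1, channels, time)
        z = self.frontend(x).squeeze(2)                    # -> (batch, 16, time // 4)
        z = self.temporal_stage(z)
        return self.head(z.flatten(1))


if __name__ == "__main__":
    dummy = torch.randn(2, 1, 22, 1000)                    # two 4-second trials at 250 Hz
    print(LightweightMIDecoder()(dummy).shape)             # torch.Size([2, 4])
```

With a 250 Hz sampling rate, a 32-sample kernel spans roughly one cycle at 8 Hz and 4x pooling retains content up to about 31 Hz, which is one simple way to read "focusing on effective frequency bands" in practice; the residual skip path then carries higher-frequency detail forward to the temporal-convolution stage.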

Results: Experiments were conducted on the BCI Competition IV 2a and 2b datasets. Our method achieved average classification accuracies of 86.23% and 86.75%, respectively, surpassing advanced models such as EEG-Conformer and EEG-TransNet. Meanwhile, the model required only 27.16M multiply-accumulate operations (MACs), a reduction of over 50% compared with the comparison models, with a forward/backward pass size of 14.33 MB.
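
Comparable complexity figures can be obtained for any PyTorch model with a standard profiling pass; the abstract does not state which tool the authors used, so the snippet below, which uses the third-party torchinfo and thop packages on the hypothetical model sketched earlier, is only one way to measure MACs and forward/backward pass size.

```python
# One way to measure MACs and forward/backward pass size for a PyTorch model;
# the profiling tools here are assumptions, not those used in the paper.
# Requires: pip install torchinfo thop
import torch
from thop import profile          # MAC and parameter counter
from torchinfo import summary     # prints mult-adds and forward/backward pass size (MB)

model = LightweightMIDecoder()    # hypothetical model defined in the sketch above
dummy = torch.randn(1, 1, 22, 1000)                 # one 4-second, 22-channel trial at 250 Hz

summary(model, input_data=dummy)                    # table includes "Forward/backward pass size (MB)"
macs, params = profile(model, inputs=(dummy,), verbose=False)
print(f"{macs / 1e6:.2f}M MACs, {params / 1e3:.1f}K parameters")
```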

Conclusion: By integrating prior knowledge from brain science with deep learning techniques (specifically frequency domain analysis, residual networks, and temporal convolutions), it is possible to effectively improve the accuracy of EEG motor imagery decoding while substantially reducing model computational complexity.

Significance: This paper employs the simplest and most fundamental techniques in its design, highlighting the critical role of brain science knowledge in model development. The proposed method demonstrates broad application potential.

Source Journal

IEEE Transactions on Biomedical Engineering (Engineering, Biomedical)
CiteScore: 9.40
Self-citation rate: 4.30%
Articles per year: 880
Average review time: 2.5 months

Journal description: IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering development in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.