Classification of Motor Imagery Tasks Using EEG Based on Wavelet Scattering Transform and Convolutional Neural Network

IF 2.2 · Q3 · Engineering, Electrical & Electronic
Rantu Buragohain, Jejariya Ajaybhai, Karan Nathwani, Vinayak Abrol
DOI: 10.1109/LSENS.2024.3488356
Journal: IEEE Sensors Letters, vol. 8, no. 12, pp. 1-4
Published: 2024-11-08
URL: https://ieeexplore.ieee.org/document/10748355/
Citations: 0

Abstract

Electroencephalogram (EEG) signal classification is of utmost importance in brain-computer interface (BCI) systems. However, the inherently complex properties of EEG signals pose a challenge to their analysis and modeling. This letter proposes a novel approach that integrates the wavelet scattering transform (WST) with a convolutional neural network (CNN), referred to as WST-CNN, for classifying motor imagery (MI) from EEG signals; the approach can extract distinctive signal characteristics even when data are limited. In this architecture, the first layer of the WST-CNN consists of nontrainable WST features with fixed initializations. Furthermore, WSTs are robust to local perturbations in the data, in particular through translation invariance, and are resilient to deformations, thereby enhancing the network's reliability. The performance of the proposed idea is evaluated on the DBCIE dataset for three scenarios: left-arm (LA) movement, right-arm (RA) movement, and simultaneous movement of both arms (BA). The BCI Competition IV-2a dataset was also employed to validate the proposed concept on four distinct MI tasks: left-hand (LH), right-hand (RH), feet (FT), and tongue (T) movements. Classification performance was evaluated in terms of accuracy ( $\eta$ ), sensitivity ( $S_{e}$ ), specificity ( $S_{p}$ ), and weighted F1-score, which reached up to 92.72%, 92.72%, 97.57%, and 92.75%, respectively, for classifying LH, RH, FT, and T on the BCI Competition IV-2a dataset, and 89.19%, 89.19%, 94.60%, and 89.33%, respectively, for classifying LA, RA, and BA on the DBCIE dataset.
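The following is a minimal sketch of how a pipeline of this kind can be assembled, assuming the kymatio library for the wavelet scattering front end and PyTorch for the CNN head. The scattering parameters (J, Q), channel count, trial length, and layer sizes are illustrative assumptions, not the configuration reported in the letter; the sketch only mirrors the architectural idea described above, namely a fixed (nontrainable) scattering layer followed by a small trainable CNN over the scattering coefficients.

```python
# Minimal WST-CNN-style sketch (assumed configuration, not the authors' exact model).
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D


class WSTCNN(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, J=6, Q=8):
        super().__init__()
        # Fixed wavelet scattering front end: its filters are analytically
        # defined, so nothing in this layer is learned during training.
        self.scattering = Scattering1D(J=J, shape=n_samples, Q=Q)
        for p in self.scattering.parameters():
            p.requires_grad = False

        # Probe the scattering output shape with a dummy forward pass.
        with torch.no_grad():
            dummy = torch.zeros(1, n_channels, n_samples)
            s_out = self.scattering(dummy)           # (1, C, paths, time)
        _, c, _, _ = s_out.shape

        # Small trainable CNN head over the scattering coefficients.
        self.cnn = nn.Sequential(
            nn.Conv2d(c, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, n_classes),
        )

    def forward(self, x):                            # x: (batch, channels, time)
        s = self.scattering(x)                       # (batch, channels, paths, time')
        return self.cnn(s)                           # class logits


if __name__ == "__main__":
    model = WSTCNN()
    logits = model(torch.randn(8, 22, 1000))         # 8 simulated EEG trials
    print(logits.shape)                              # torch.Size([8, 4])
```

Freezing the scattering front end keeps the number of trainable parameters small, which is consistent with the letter's motivation of learning discriminative MI representations from limited EEG data.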