{"title":"Spatial-Temporal Mamba Network for EEG-based Motor Imagery Classification","authors":"Xiaoxiao Yang, Ziyu Jia","doi":"arxiv-2409.09627","DOIUrl":null,"url":null,"abstract":"Motor imagery (MI) classification is key for brain-computer interfaces\n(BCIs). Until recent years, numerous models had been proposed, ranging from\nclassical algorithms like Common Spatial Pattern (CSP) to deep learning models\nsuch as convolutional neural networks (CNNs) and transformers. However, these\nmodels have shown limitations in areas such as generalizability, contextuality\nand scalability when it comes to effectively extracting the complex\nspatial-temporal information inherent in electroencephalography (EEG) signals.\nTo address these limitations, we introduce Spatial-Temporal Mamba Network\n(STMambaNet), an innovative model leveraging the Mamba state space\narchitecture, which excels in processing extended sequences with linear\nscalability. By incorporating spatial and temporal Mamba encoders, STMambaNet\neffectively captures the intricate dynamics in both space and time,\nsignificantly enhancing the decoding performance of EEG signals for MI\nclassification. Experimental results on BCI Competition IV 2a and 2b datasets\ndemonstrate STMambaNet's superiority over existing models, establishing it as a\npowerful tool for advancing MI-based BCIs and improving real-world BCI systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09627","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Motor imagery (MI) classification is key for brain-computer interfaces (BCIs). In recent years, numerous models have been proposed, ranging from classical algorithms such as Common Spatial Pattern (CSP) to deep learning models such as convolutional neural networks (CNNs) and transformers. However, these models have shown limitations in generalizability, contextuality, and scalability when extracting the complex spatial-temporal information inherent in electroencephalography (EEG) signals. To address these limitations, we introduce the Spatial-Temporal Mamba Network (STMambaNet), an innovative model leveraging the Mamba state space architecture, which excels at processing long sequences with linear scalability. By incorporating spatial and temporal Mamba encoders, STMambaNet effectively captures the intricate dynamics in both space and time, significantly enhancing the decoding performance of EEG signals for MI classification. Experimental results on the BCI Competition IV 2a and 2b datasets demonstrate STMambaNet's superiority over existing models, establishing it as a powerful tool for advancing MI-based BCIs and improving real-world BCI systems.
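
To make the two-axis encoding idea concrete, below is a minimal, hypothetical sketch of a spatial-temporal Mamba encoder for EEG: one Mamba block runs over the time axis, another over the channel (spatial) axis, and the pooled features are fused for classification. The projections, layer sizes, pooling, and fusion step are assumptions made for illustration only, not the authors' published STMambaNet architecture; the sketch uses the Mamba block from the open-source `mamba_ssm` package, which requires a CUDA build of PyTorch.

```python
# Hypothetical sketch of the spatial + temporal Mamba encoding idea from the
# abstract. Layer sizes, projections, pooling, and fusion are illustrative
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # https://github.com/state-spaces/mamba


class SpatialTemporalMambaSketch(nn.Module):
    def __init__(self, n_channels=22, n_times=1000, d_model=64, n_classes=4):
        super().__init__()
        # Per-time-step features: project across EEG channels.
        self.temporal_proj = nn.Linear(n_channels, d_model)
        # Per-channel features: project across time samples.
        self.spatial_proj = nn.Linear(n_times, d_model)
        # Mamba expects input of shape (batch, sequence_length, d_model).
        self.temporal_mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.spatial_mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_times) raw EEG segments.
        temporal_seq = self.temporal_proj(x.transpose(1, 2))  # (B, n_times, d_model)
        spatial_seq = self.spatial_proj(x)                    # (B, n_channels, d_model)
        temporal_feat = self.temporal_mamba(temporal_seq).mean(dim=1)
        spatial_feat = self.spatial_mamba(spatial_seq).mean(dim=1)
        return self.classifier(torch.cat([temporal_feat, spatial_feat], dim=-1))


# Example: a batch of 8 trials shaped like BCI Competition IV 2a data
# (22 channels, 4-second windows at 250 Hz -> 1000 samples, 4 MI classes).
model = SpatialTemporalMambaSketch().cuda()
logits = model(torch.randn(8, 22, 1000, device="cuda"))  # -> (8, 4)
```

Because the Mamba blocks scale linearly in sequence length, both branches stay tractable even for long EEG windows; how the actual STMambaNet stacks, normalizes, or fuses its encoders is described in the paper itself.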