{"title":"基于脑电图的运动图像分类时空曼巴网络","authors":"Xiaoxiao Yang, Ziyu Jia","doi":"arxiv-2409.09627","DOIUrl":null,"url":null,"abstract":"Motor imagery (MI) classification is key for brain-computer interfaces\n(BCIs). Until recent years, numerous models had been proposed, ranging from\nclassical algorithms like Common Spatial Pattern (CSP) to deep learning models\nsuch as convolutional neural networks (CNNs) and transformers. However, these\nmodels have shown limitations in areas such as generalizability, contextuality\nand scalability when it comes to effectively extracting the complex\nspatial-temporal information inherent in electroencephalography (EEG) signals.\nTo address these limitations, we introduce Spatial-Temporal Mamba Network\n(STMambaNet), an innovative model leveraging the Mamba state space\narchitecture, which excels in processing extended sequences with linear\nscalability. By incorporating spatial and temporal Mamba encoders, STMambaNet\neffectively captures the intricate dynamics in both space and time,\nsignificantly enhancing the decoding performance of EEG signals for MI\nclassification. Experimental results on BCI Competition IV 2a and 2b datasets\ndemonstrate STMambaNet's superiority over existing models, establishing it as a\npowerful tool for advancing MI-based BCIs and improving real-world BCI systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatial-Temporal Mamba Network for EEG-based Motor Imagery Classification\",\"authors\":\"Xiaoxiao Yang, Ziyu Jia\",\"doi\":\"arxiv-2409.09627\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Motor imagery (MI) classification is key for brain-computer interfaces\\n(BCIs). Until recent years, numerous models had been proposed, ranging from\\nclassical algorithms like Common Spatial Pattern (CSP) to deep learning models\\nsuch as convolutional neural networks (CNNs) and transformers. However, these\\nmodels have shown limitations in areas such as generalizability, contextuality\\nand scalability when it comes to effectively extracting the complex\\nspatial-temporal information inherent in electroencephalography (EEG) signals.\\nTo address these limitations, we introduce Spatial-Temporal Mamba Network\\n(STMambaNet), an innovative model leveraging the Mamba state space\\narchitecture, which excels in processing extended sequences with linear\\nscalability. By incorporating spatial and temporal Mamba encoders, STMambaNet\\neffectively captures the intricate dynamics in both space and time,\\nsignificantly enhancing the decoding performance of EEG signals for MI\\nclassification. 
Experimental results on BCI Competition IV 2a and 2b datasets\\ndemonstrate STMambaNet's superiority over existing models, establishing it as a\\npowerful tool for advancing MI-based BCIs and improving real-world BCI systems.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09627\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09627","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Motor imagery (MI) classification is key for brain-computer interfaces (BCIs). In recent years, numerous models have been proposed, ranging from classical algorithms such as Common Spatial Pattern (CSP) to deep learning models such as convolutional neural networks (CNNs) and transformers. However, these models have shown limitations in areas such as generalizability, contextuality, and scalability when it comes to effectively extracting the complex spatial-temporal information inherent in electroencephalography (EEG) signals. To address these limitations, we introduce the Spatial-Temporal Mamba Network (STMambaNet), an innovative model leveraging the Mamba state space architecture, which excels at processing long sequences with linear scalability. By incorporating spatial and temporal Mamba encoders, STMambaNet effectively captures the intricate dynamics in both space and time, significantly enhancing the decoding performance of EEG signals for MI classification. Experimental results on the BCI Competition IV 2a and 2b datasets demonstrate STMambaNet's superiority over existing models, establishing it as a powerful tool for advancing MI-based BCIs and improving real-world BCI systems.
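The abstract describes the architecture only at a high level. As a purely illustrative sketch (not the authors' released code), the PyTorch snippet below shows one way a spatial-temporal state-space encoder of this kind could be organized: a simplified selective-scan block, standing in for Mamba's hardware-efficient parallel scan, applied once along the EEG channel axis and once along the time axis, with pooled features fed to a motor-imagery classifier. All module names, layer sizes, and the 22-channel / 1000-sample input shape (roughly one BCI Competition IV 2a trial) are assumptions made for illustration.

# Hypothetical, simplified sketch of a spatial-temporal Mamba-style encoder for EEG.
# Module names, hyperparameters, and the two-branch design are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleSSMBlock(nn.Module):
    """Minimal selective state-space block: a diagonal SSM whose step size is
    predicted from the input, scanned sequentially over the sequence axis."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.dt_proj = nn.Linear(d_model, 1)                       # input-dependent step size
        self.A_log = nn.Parameter(torch.zeros(d_model, d_state))   # log of -A (diagonal dynamics)
        self.B_proj = nn.Linear(d_model, d_state)
        self.C_proj = nn.Linear(d_model, d_state)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):                                          # x: (batch, length, d_model)
        u = F.silu(self.in_proj(x))
        dt = F.softplus(self.dt_proj(u))                           # (B, L, 1), positive step sizes
        A = -torch.exp(self.A_log)                                 # (D, N), stable (negative) dynamics
        B_t, C_t = self.B_proj(u), self.C_proj(u)                  # (B, L, N) each
        h = torch.zeros(x.size(0), u.size(-1), A.size(-1), device=x.device)
        ys = []
        for t in range(x.size(1)):                                 # sequential scan, O(L) in length;
            dA = torch.exp(dt[:, t].unsqueeze(-1) * A)             # a real Mamba uses a parallel scan
            dBu = dt[:, t].unsqueeze(-1) * B_t[:, t].unsqueeze(1) * u[:, t].unsqueeze(-1)
            h = dA * h + dBu                                       # (B, D, N) hidden state update
            ys.append((h * C_t[:, t].unsqueeze(1)).sum(-1))        # read out: (B, D)
        y = torch.stack(ys, dim=1)
        return x + self.out_proj(y)                                # residual connection


class STMambaNetSketch(nn.Module):
    """Toy two-branch encoder: one SSM block scans over time (temporal dynamics),
    the other over EEG channels (spatial dynamics); pooled features are classified."""

    def __init__(self, n_channels=22, n_times=1000, d_model=64, n_classes=4):
        super().__init__()
        self.time_embed = nn.Linear(n_channels, d_model)           # features per time step
        self.chan_embed = nn.Linear(n_times, d_model)               # features per channel
        self.temporal = SimpleSSMBlock(d_model)                     # sequence axis = time
        self.spatial = SimpleSSMBlock(d_model)                      # sequence axis = channels
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, eeg):                                         # eeg: (batch, channels, times)
        t_feat = self.temporal(self.time_embed(eeg.transpose(1, 2))).mean(1)   # (B, D)
        s_feat = self.spatial(self.chan_embed(eeg)).mean(1)                    # (B, D)
        return self.head(torch.cat([t_feat, s_feat], dim=-1))                  # MI class logits


if __name__ == "__main__":
    model = STMambaNetSketch()
    logits = model(torch.randn(2, 22, 1000))                        # 2 toy trials, 22 channels, 1000 samples
    print(logits.shape)                                             # torch.Size([2, 4])

Running the demo at the bottom prints logits of shape (2, 4), one score per MI class for each of the two toy trials. A production implementation would replace the Python loop with a parallel (associative) scan, which is what gives Mamba the linear-time scalability on long sequences that the abstract refers to.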