{"title":"LGFormer:为脑电图解码整合局部和全局表征","authors":"Wenjie Yang, Xingfu Wang, Wenxia Qi, Wei Wang","doi":"10.1088/1741-2552/adc5a3","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>Electroencephalography (EEG) decoding is challenging because of its temporal variability and low signal-to-noise ratio, which complicate the extraction of meaningful information from signals. Although convolutional neural networks (CNNs) effectively extract local features from EEG signals, they are constrained by restricted receptive fields. In contrast, transformers excel at capturing global dependencies through self-attention mechanisms but often require extensive training data and computational resources, which limits their efficiency on EEG datasets with limited samples.<i>Approach.</i>In this paper, we propose LGFormer, a hybrid network designed to efficiently learn both local and global representations for EEG decoding. LGFormer employs a deep attention module to extract global information from EEG signals, dynamically adjusting the focus of CNNs. Subsequently, LGFormer incorporates a local-enhanced transformer, combining the strengths of CNNs and transformers to achieve multiscale perception from local to global. Despite integrating multiple advanced techniques, LGFormer maintains a lightweight design and training efficiency.<i>Main results.</i>LGFormer achieves state-of-the-art performance within 200 training epochs across four public datasets, including motor imagery, cognitive workload, and error-related negativity decoding tasks. Additionally, we propose a novel spatial and temporal attention visualization method, revealing that LGFormer captures discriminative spatial and temporal features, enhancing model interpretability and providing insights into its decision-making process.<i>Significance.</i>In summary, LGFormer demonstrates superior performance while maintaining high training efficiency across different tasks, highlighting its potential as a versatile and practical model for EEG decoding.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LGFormer: integrating local and global representations for EEG decoding.\",\"authors\":\"Wenjie Yang, Xingfu Wang, Wenxia Qi, Wei Wang\",\"doi\":\"10.1088/1741-2552/adc5a3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective.</i>Electroencephalography (EEG) decoding is challenging because of its temporal variability and low signal-to-noise ratio, which complicate the extraction of meaningful information from signals. Although convolutional neural networks (CNNs) effectively extract local features from EEG signals, they are constrained by restricted receptive fields. In contrast, transformers excel at capturing global dependencies through self-attention mechanisms but often require extensive training data and computational resources, which limits their efficiency on EEG datasets with limited samples.<i>Approach.</i>In this paper, we propose LGFormer, a hybrid network designed to efficiently learn both local and global representations for EEG decoding. LGFormer employs a deep attention module to extract global information from EEG signals, dynamically adjusting the focus of CNNs. 
Subsequently, LGFormer incorporates a local-enhanced transformer, combining the strengths of CNNs and transformers to achieve multiscale perception from local to global. Despite integrating multiple advanced techniques, LGFormer maintains a lightweight design and training efficiency.<i>Main results.</i>LGFormer achieves state-of-the-art performance within 200 training epochs across four public datasets, including motor imagery, cognitive workload, and error-related negativity decoding tasks. Additionally, we propose a novel spatial and temporal attention visualization method, revealing that LGFormer captures discriminative spatial and temporal features, enhancing model interpretability and providing insights into its decision-making process.<i>Significance.</i>In summary, LGFormer demonstrates superior performance while maintaining high training efficiency across different tasks, highlighting its potential as a versatile and practical model for EEG decoding.</p>\",\"PeriodicalId\":94096,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/adc5a3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/adc5a3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LGFormer: integrating local and global representations for EEG decoding.
Objective. Electroencephalography (EEG) decoding is challenging because of the temporal variability and low signal-to-noise ratio of EEG signals, which complicate the extraction of meaningful information. Although convolutional neural networks (CNNs) effectively extract local features from EEG signals, they are constrained by restricted receptive fields. In contrast, transformers excel at capturing global dependencies through self-attention but typically require extensive training data and computational resources, limiting their efficiency on EEG datasets with few samples.

Approach. We propose LGFormer, a hybrid network designed to efficiently learn both local and global representations for EEG decoding. LGFormer employs a deep attention module to extract global information from EEG signals and dynamically adjust the focus of the CNN. It then incorporates a local-enhanced transformer that combines the strengths of CNNs and transformers to achieve multiscale perception from local to global. Despite integrating multiple advanced techniques, LGFormer retains a lightweight design and high training efficiency.

Main results. LGFormer achieves state-of-the-art performance within 200 training epochs on four public datasets spanning motor imagery, cognitive workload, and error-related negativity decoding tasks. In addition, we propose a novel spatial and temporal attention visualization method, which reveals that LGFormer captures discriminative spatial and temporal features, enhancing model interpretability and providing insight into its decision-making process.

Significance. LGFormer delivers superior performance while maintaining high training efficiency across diverse tasks, highlighting its potential as a versatile and practical model for EEG decoding.
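The abstract does not include an implementation, but as a rough illustration of the local-to-global design it describes, the sketch below pairs a small CNN feature extractor (local) with a self-attention layer over the time axis (global). This is a minimal PyTorch sketch, not the authors' LGFormer: the module name LocalGlobalBlock and all kernel sizes, dimensions, and layer choices are assumptions made for the example.

```python
# Illustrative sketch only: a minimal hybrid CNN + self-attention block for EEG
# decoding, in the spirit of the local/global design described in the abstract.
# NOT the authors' LGFormer; all names and hyperparameters below are assumptions.
import torch
import torch.nn as nn


class LocalGlobalBlock(nn.Module):
    def __init__(self, n_channels: int = 22, d_model: int = 40, n_heads: int = 4):
        super().__init__()
        # Local feature extractor: temporal then spatial convolution,
        # a common pattern in CNN-based EEG decoders.
        self.local = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),     # spatial conv
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),                             # downsample in time
        )
        # Global dependency modeling: self-attention over the temporal tokens.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, channels, time)
        feats = self.local(x)                      # (batch, d_model, 1, time')
        tokens = feats.squeeze(2).transpose(1, 2)  # (batch, time', d_model)
        out, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + out)             # residual + layer norm


# Example: a batch of 8 trials, 22 EEG channels, 1000 time samples.
x = torch.randn(8, 1, 22, 1000)
print(LocalGlobalBlock()(x).shape)  # torch.Size([8, 250, 40])
```

The residual connection around the attention layer lets the convolutional (local) features pass through unchanged when global context adds little, which is one plausible way a hybrid design can stay trainable on small EEG datasets.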