{"title":"Enhanced electroencephalogram signal classification: A hybrid convolutional neural network with attention-based feature selection.","authors":"Bao Liu, Yuxin Wang, Lei Gao, Zhenxin Cai","doi":"10.1016/j.brainres.2025.149484","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate recognition and classification of motor imagery electroencephalogram (MI-EEG) signals are crucial for the successful implementation of brain-computer interfaces (BCI). However, inherent characteristics in original MI-EEG signals, such as nonlinearity, low signal-to-noise ratios, and large individual variations, present significant challenges for MI-EEG classification using traditional machine learning methods. To address these challenges, we propose an automatic feature extraction method rooted in deep learning for MI-EEG classification. First, original MI-EEG signals undergo noise reduction through discrete wavelet transform and common average reference. To reflect the regularity and specificity of brain neural activities, a convolutional neural network (CNN) is used to extract the time-domain features of MI-EEG. We also extracted spatial features to reflect the activity relationships and connection states of the brain in different regions. This process yields time series containing spatial information, focusing on enhancing crucial feature sequences through talking-heads attention. Finally, more abstract spatial-temporal features are extracted using a temporal convolutional network (TCN), and classification is done through a fully connected layer. Validation experiments based on the BCI Competition IV-2a dataset show that the enhanced EEG model achieves an impressive average classification accuracy of 85.53% for each subject. Compared with CNN, EEGNet, CNN-LSTM and EEG-TCNet, the classification accuracy of this model is improved by 11.24%, 6.90%, 11.18% and 6.13%, respectively. Our work underscores the potential of the proposed model to enhance intention recognition in MI-EEG significantly.</p>","PeriodicalId":9083,"journal":{"name":"Brain Research","volume":" ","pages":"149484"},"PeriodicalIF":2.7000,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brain Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.brainres.2025.149484","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Abstract
Accurate recognition and classification of motor imagery electroencephalogram (MI-EEG) signals are crucial for the successful implementation of brain-computer interfaces (BCI). However, inherent characteristics of raw MI-EEG signals, such as nonlinearity, low signal-to-noise ratios, and large individual variations, present significant challenges for MI-EEG classification with traditional machine learning methods. To address these challenges, we propose an automatic feature extraction method rooted in deep learning for MI-EEG classification. First, the raw MI-EEG signals undergo noise reduction through the discrete wavelet transform and common average referencing. To capture the regularity and specificity of brain neural activity, a convolutional neural network (CNN) extracts the time-domain features of the MI-EEG. Spatial features are also extracted to reflect the activity relationships and connection states among different brain regions. This process yields time series enriched with spatial information, and talking-heads attention is applied to emphasize the most informative feature sequences. Finally, more abstract spatial-temporal features are extracted by a temporal convolutional network (TCN), and classification is performed by a fully connected layer. Validation experiments on the BCI Competition IV-2a dataset show that the enhanced EEG model achieves an average per-subject classification accuracy of 85.53%. Compared with CNN, EEGNet, CNN-LSTM, and EEG-TCNet, the proposed model improves classification accuracy by 11.24%, 6.90%, 11.18%, and 6.13%, respectively. Our work underscores the potential of the proposed model to significantly enhance intention recognition from MI-EEG.
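For readers who want a concrete picture of the pipeline described in the abstract, the following is a minimal PyTorch sketch of a CNN → talking-heads attention → TCN → fully connected classifier, assuming 22-channel, 4-class input in the style of BCI Competition IV-2a. The class names (TalkingHeadsAttention, HybridEEGNet), layer sizes, kernel sizes, and the dilated-convolution stand-in for the TCN are illustrative assumptions, not the authors' implementation; the discrete wavelet transform and common-average-reference preprocessing is omitted.

```python
# Illustrative sketch only: hyperparameters and layer choices are assumptions,
# not taken from the paper. Requires PyTorch.
import torch
import torch.nn as nn


class TalkingHeadsAttention(nn.Module):
    """Multi-head self-attention with learned mixing of attention maps across
    heads before and after the softmax (talking-heads attention)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.pre_mix = nn.Conv2d(heads, heads, kernel_size=1)   # mix logits across heads
        self.post_mix = nn.Conv2d(heads, heads, kernel_size=1)  # mix weights across heads
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq_len, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.heads, d // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (b, heads, n, head_dim)
        logits = (q @ k.transpose(-2, -1)) * self.scale   # (b, heads, n, n)
        logits = self.pre_mix(logits)                     # talking heads: mix before softmax
        attn = self.post_mix(logits.softmax(dim=-1))      # talking heads: mix after softmax
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


class HybridEEGNet(nn.Module):
    """CNN (temporal + spatial conv) -> talking-heads attention -> dilated
    convolutions as a minimal TCN stand-in -> fully connected classifier."""

    def __init__(self, channels: int = 22, classes: int = 4, dim: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters
            nn.Conv2d(dim, dim, kernel_size=(channels, 1)),           # spatial filters
            nn.BatchNorm2d(dim),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        self.attn = TalkingHeadsAttention(dim)
        self.tcn = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=2, dilation=2), nn.ELU(),
            nn.Conv1d(dim, dim, 3, padding=4, dilation=4), nn.ELU(),
        )
        self.fc = nn.Linear(dim, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, samples)
        x = self.cnn(x.unsqueeze(1)).squeeze(2)           # -> (batch, dim, time)
        x = self.attn(x.transpose(1, 2)).transpose(1, 2)  # attention over time steps
        x = self.tcn(x)                                   # deeper temporal features
        return self.fc(x.mean(dim=-1))                    # global average pool + FC


# Quick shape check on dummy data: 8 trials, 22 channels, 1000 samples (4 s at 250 Hz).
model = HybridEEGNet()
print(model(torch.randn(8, 22, 1000)).shape)  # torch.Size([8, 4])
```

Note that the dilated convolutions above use symmetric padding rather than strictly causal padding; a full TCN block would typically also include causal padding, residual connections, and dropout.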
Journal introduction:
An international multidisciplinary journal devoted to fundamental research in the brain sciences.
Brain Research publishes papers reporting interdisciplinary investigations of nervous system structure and function that are of general interest to the international community of neuroscientists. As is evident from the journal's name, its scope is broad, ranging from cellular and molecular studies through systems neuroscience, cognition and disease. Invited reviews are also published; suggestions for and inquiries about potential reviews are welcomed.
With the appearance of the final issue of the 2011 subscription, Vol. 67/1-2 (24 June 2011), Brain Research Reviews has ceased publication as a distinct journal separate from Brain Research. Review articles accepted for Brain Research are now published in that journal.