{"title":"MSBiLSTM-Attention: EEG Emotion Recognition Model Based on Spatiotemporal Feature Fusion.","authors":"Yahong Ma, Zhentao Huang, Yuyao Yang, Zuowen Chen, Qi Dong, Shanwen Zhang, Yuan Li","doi":"10.3390/biomimetics10030178","DOIUrl":null,"url":null,"abstract":"<p><p>Emotional states play a crucial role in shaping decision-making and social interactions, with sentiment analysis becoming an essential technology in human-computer emotional engagement, garnering increasing interest in artificial intelligence research. In EEG-based emotion analysis, the main challenges are feature extraction and classifier design, making the extraction of spatiotemporal information from EEG signals vital for effective emotion classification. Current methods largely depend on machine learning with manual feature extraction, while deep learning offers the advantage of automatic feature extraction and classification. Nonetheless, many deep learning approaches still necessitate manual preprocessing, which hampers accuracy and convenience. This paper introduces a novel deep learning technique that integrates multi-scale convolution and bidirectional long short-term memory networks with an attention mechanism for automatic EEG feature extraction and classification. By using raw EEG data, the method applies multi-scale convolutional neural networks and bidirectional long short-term memory networks to extract and merge features, selects key features via an attention mechanism, and classifies emotional EEG signals through a fully connected layer. The proposed model was evaluated on the SEED dataset for emotion classification. Experimental results demonstrate that this method effectively classifies EEG-based emotions, achieving classification accuracies of 99.44% for the three-class task and 99.85% for the four-class task in single validation, with average 10-fold-cross-validation accuracies of 99.49% and 99.70%, respectively. 
These findings suggest that the MSBiLSTM-Attention model is a powerful approach for emotion recognition.</p>","PeriodicalId":8907,"journal":{"name":"Biomimetics","volume":"10 3","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11940598/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/biomimetics10030178","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
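The abstract's pipeline ends with an attention mechanism that selects key features from the BiLSTM outputs before the fully connected classifier. The paper does not give implementation details, but a common form of this step is attention-weighted pooling: score each timestep's feature vector, normalize the scores with a softmax, and take the weighted sum. Below is a minimal numpy sketch of that step only; the function name `attention_pool`, the scoring vector `w`, and all shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, w):
    """Attention-weighted pooling over a sequence of feature vectors.

    H : (T, d) array -- e.g. per-timestep BiLSTM outputs (hypothetical shapes)
    w : (d,) array   -- learned scoring vector (hypothetical parameter)
    Returns the (d,) context vector fed to the classifier.
    """
    scores = H @ w           # (T,) unnormalized attention scores
    alpha = softmax(scores)  # (T,) attention weights, sum to 1
    return alpha @ H         # (d,) weighted sum of timestep features

# Toy usage: 8 timesteps of 4-dimensional features.
rng = np.random.default_rng(0)
H = rng.normal(size=(8, 4))
w = rng.normal(size=4)
c = attention_pool(H, w)
print(c.shape)  # (4,)
```

In a full model, `H` would come from the merged multi-scale CNN + BiLSTM features and `w` would be trained jointly with the network; the pooled vector `c` is what the final fully connected layer classifies.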