{"title":"A parallel neural networks for emotion recognition based on EEG signals","authors":"","doi":"10.1016/j.neucom.2024.128624","DOIUrl":null,"url":null,"abstract":"<div><p>Our study proposes a novel Parallel Temporal–Spatial-Frequency Neural Network (PTSFNN) for emotion recognition. The network processes EEG signals in the time, frequency, and spatial domains simultaneously to extract discriminative features. Despite its relatively simple architecture, the proposed model achieves superior performance. Specifically, PTSFNN first applies wavelet transform to the raw EEG signals and then reconstructs the coefficients based on frequency hierarchy, thereby achieving frequency decomposition. Subsequently, the core part of the network performs three independent parallel convolution operations on the decomposed signals, including a novel graph convolutional network. Finally, an attention mechanism-based post-processing operation is designed to effectively enhance feature representation. The features obtained from the three modules are concatenated for classification, with the cross-entropy loss function being adopted. To evaluate the model’s performance, extensive experiments are conducted on the SEED and SEED-IV public datasets. The experimental results demonstrate that PTSFNN achieves excellent performance in emotion recognition tasks, with classification accuracies of 87.63% and 74.96%, respectively. 
Comparative experiments with previous state-of-the-art methods confirm the superiority of our proposed model, which can efficiently extract emotion information from EEG signals.</p></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092523122401395X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
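The pipeline described in the abstract — wavelet-based frequency decomposition, three parallel branches including a graph convolution over the electrode layout, attention-based post-processing, then concatenation and cross-entropy classification — can be sketched at a high level. The paper provides no code here, so everything below is an illustrative assumption rather than the authors' PTSFNN: a one-level Haar wavelet stands in for their frequency hierarchy, the electrode graph is taken as fully connected, and all weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_bands(x):
    """One-level Haar DWT along time, then per-band reconstruction
    (illustrative stand-in for the paper's wavelet decomposition).
    band_lo + band_hi recovers the input exactly."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # approximation coefficients
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # detail coefficients
    band_lo = np.repeat(lo, 2, axis=1)
    band_hi = np.repeat(hi, 2, axis=1) * np.tile([1.0, -1.0], lo.shape[1])
    return band_lo, band_hi

def temporal_conv(x, w):
    """Depthwise 1-D convolution along the time axis (temporal branch)."""
    return np.stack([np.convolve(ch, w, mode="valid") for ch in x])

def graph_conv(x, adj, w):
    """One graph-convolution layer over the electrode graph:
    symmetric-normalized adjacency propagation, then a linear map."""
    deg = adj.sum(axis=1)
    a_norm = adj / np.sqrt(np.outer(deg, deg))
    return a_norm @ x @ w

def attention_pool(feat):
    """Softmax attention over rows, then a weighted sum — a simple
    stand-in for the paper's attention-based post-processing."""
    scores = feat.mean(axis=1)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ feat

def cross_entropy(logits, label):
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

# Toy EEG segment: 8 electrodes x 128 time samples.
x = rng.standard_normal((8, 128))

# 1) Frequency decomposition into two illustrative bands.
band_lo, band_hi = haar_bands(x)

# 2) Three parallel branches, each followed by attention pooling.
f_time = attention_pool(temporal_conv(x, rng.standard_normal(5) * 0.1))
adj = np.ones((8, 8))  # hypothetical fully connected electrode graph
f_graph = attention_pool(graph_conv(x, adj, rng.standard_normal((128, 16)) * 0.1))
f_freq = attention_pool(np.concatenate([band_lo, band_hi], axis=1))

# 3) Concatenate branch features; classify into 3 emotions (as in SEED).
feat = np.concatenate([f_time, f_graph, f_freq])
w_out = rng.standard_normal((feat.size, 3)) * 0.05
logits = feat @ w_out
probs = np.exp(logits) / np.exp(logits).sum()
loss = cross_entropy(logits, label=1)
```

A trained model would of course learn the convolution kernels, graph weights, and classifier jointly by minimizing this cross-entropy over labeled SEED segments; the sketch only fixes the data flow.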
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.