Chenyu Pan, Huimin Lu, Chenglin Lin, Zeyi Zhong, Bing Liu
{"title":"Set-pMAE: spatial-spEctral-temporal based parallel masked autoEncoder for EEG emotion recognition","authors":"Chenyu Pan, Huimin Lu, Chenglin Lin, Zeyi Zhong, Bing Liu","doi":"10.1007/s11571-024-10162-5","DOIUrl":null,"url":null,"abstract":"<p>The utilization of Electroencephalography (EEG) for emotion recognition has emerged as the primary tool in the field of affective computing. Traditional supervised learning methods are typically constrained by the availability of labeled data, which can result in weak generalizability of learned features. Additionally, EEG signals are highly correlated with human emotional states across temporal, spatial, and spectral dimensions. In this paper, we propose a Spatial-spEctral-Temporal based parallel Masked Autoencoder (SET-pMAE) model for EEG emotion recognition. SET-pMAE learns generic representations of spatial-temporal features and spatial-spectral features through a dual-branch self-supervised task. The reconstruction task of the spatial-temporal branch aims to capture the spatial-temporal contextual dependencies of EEG signals, while the reconstruction task of the spatial-spectral branch focuses on capturing the intrinsic spatial associations of the spectral domain across different brain regions. By learning from both tasks simultaneously, SET-pMAE can capture the generalized representations of features from the both tasks, thereby reducing the risk of overfitting. In order to verify the effectiveness of the proposed model, a series of experiments are conducted on the DEAP and DREAMER datasets. 
Results from experiments reveal that by employing self-supervised learning, the proposed model effectively captures more discriminative and generalized features, thereby attaining excellent performance.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Neurodynamics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11571-024-10162-5","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Electroencephalography (EEG) has become the primary tool for emotion recognition in affective computing. Traditional supervised learning methods are typically constrained by the availability of labeled data, which can result in weak generalizability of the learned features. Additionally, EEG signals are highly correlated with human emotional states across the temporal, spatial, and spectral dimensions. In this paper, we propose a Spatial-spEctral-Temporal based parallel Masked Autoencoder (SET-pMAE) model for EEG emotion recognition. SET-pMAE learns generic representations of spatial-temporal and spatial-spectral features through a dual-branch self-supervised task. The reconstruction task of the spatial-temporal branch captures the spatial-temporal contextual dependencies of EEG signals, while the reconstruction task of the spatial-spectral branch captures the intrinsic spatial associations of the spectral domain across different brain regions. By learning both tasks simultaneously, SET-pMAE captures generalized feature representations from both, thereby reducing the risk of overfitting. To verify the effectiveness of the proposed model, a series of experiments is conducted on the DEAP and DREAMER datasets. The results reveal that, by employing self-supervised learning, the proposed model captures more discriminative and generalized features, thereby attaining excellent performance.
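The core mechanism the abstract describes, masking part of the input and training each branch to reconstruct it, can be sketched in a few lines. This is an illustrative toy, not the authors' code: the patch shapes, the 50% mask ratio, and the identity "decoder" stand-in are all assumptions; a real SET-pMAE would reconstruct the hidden patches from the visible context with a learned encoder-decoder per branch.

```python
# Toy sketch of dual-branch masked reconstruction for EEG (assumptions:
# 32 patches x 128 features per view, 50% mask ratio, MSE on hidden patches).
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, mask_ratio=0.5):
    """Zero out a random subset of patches along the first axis.
    Returns the masked input and a boolean mask (True = hidden)."""
    n = x.shape[0]
    hidden = rng.permutation(n) < int(n * mask_ratio)
    x_masked = x.copy()
    x_masked[hidden] = 0.0
    return x_masked, hidden

def reconstruction_loss(pred, target, hidden):
    """MSE computed only on the masked (hidden) patches, MAE-style."""
    return float(np.mean((pred[hidden] - target[hidden]) ** 2))

# Two views of the same trial: a spatial-temporal tensor and a
# spatial-spectral tensor (e.g. band powers per channel group).
temporal = rng.standard_normal((32, 128))
spectral = rng.standard_normal((32, 128))

t_in, t_hidden = random_mask(temporal)
s_in, s_hidden = random_mask(spectral)

# Stand-in decoder: identity on the masked input. The parallel branches
# contribute one reconstruction loss each; training minimizes the sum.
loss = (reconstruction_loss(t_in, temporal, t_hidden)
        + reconstruction_loss(s_in, spectral, s_hidden))
```

Summing the two branch losses is what lets a single backbone learn from both views at once, which is the regularizing effect the abstract credits for reduced overfitting.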
About the journal:
Cognitive Neurodynamics provides a unique forum for communication and cooperation among scientists and engineers working in cognitive neurodynamics, intelligent science, and their applications, bridging the gap between theory and application without any preference for purely theoretical, experimental, or computational models.
The emphasis is to publish original models of cognitive neurodynamics, novel computational theories and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is also very welcome.
The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, intelligent science as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computing, and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that, for one reason or another, do not make a sufficiently complete story to justify a full article. Brief communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.