Title: MEFD dataset and GCSFormer model: Cross-subject emotion recognition based on multimodal physiological signals
Authors: Xiangyu Deng, Zhecong Fan, Wenbo Dong
DOI: 10.1088/2057-1976/ae0e28
Journal: Biomedical Physics & Engineering Express (IF 1.6, JCR Q3, Radiology, Nuclear Medicine & Medical Imaging)
Publication date: 2025-10-01
Citations: 0
Abstract
Cross-subject emotion recognition is an important research direction in affective computing and brain-computer interfaces, aiming to identify the emotional states of different individuals from physiological signals such as functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG). Currently, most EEG-based emotion recognition datasets are unimodal or bimodal, which may overlook the emotional information carried by subjects' other physiological signals. In this paper, a multimodal dataset named the Multimodal Emotion Four Category Dataset (MEFD) is constructed, comprising EEG, heart rate variability (HRV), electrooculogram (EOG), and electrodermal activity (EDA) data from 34 participants in four emotional states: sadness, happiness, fear, and calm. This dataset will contribute to the development of multimodal emotion recognition research. To address the recognition difficulty caused by individual differences in cross-subject emotion recognition tasks, a classification model named the Global Convolution Shifted Window Transformer (GCSFormer) is proposed, composed of an EEG-Swin Convolution module and an improved Global Adaptive Transformer (GAT) module. By using a parallel network, both feature discrimination and generalization ability are enhanced. The model is applied to classify the EEG data in the self-built MEFD dataset, and the results are compared with those of mainstream methods. The experimental results show that the proposed EEG classification method achieves the best average accuracy of 85.36%, precision of 85.23%, recall of 86.35%, and F1 score of 84.52% in the cross-subject emotion recognition task. The excellent performance of GCSFormer in the cross-subject emotion recognition task was thereby verified.
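The abstract reports four macro-averaged metrics (accuracy, precision, recall, F1) for the four-class task. As a minimal sketch of how such macro metrics are typically computed, assuming per-class one-vs-rest counts (the label names and helper are illustrative, not from the paper's code):

```python
# Hypothetical sketch: macro-averaged accuracy, precision, recall, and F1
# for a four-class emotion task (sadness, happiness, fear, calm).
LABELS = ["sadness", "happiness", "fear", "calm"]

def macro_metrics(y_true, y_pred):
    """Return (accuracy, macro precision, macro recall, macro F1)."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in LABELS:
        # One-vs-rest counts for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(LABELS)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Macro averaging weights each emotion class equally regardless of class size, which is why the reported precision, recall, and F1 can differ from the plain accuracy.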
Journal introduction:
BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. Characterized by a broad geographical coverage and a fast-track peer-review process, relevant topics include all aspects of biophysics, medical physics and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.