Yanling An , Shaohai Hu , Shuaiqi Liu , Xinrui Wang , Zhihui Gu , Yudong Zhang
{"title":"LGDAAN-Nets:一种用于EEG情绪识别的局部和全局域对抗注意神经网络","authors":"Yanling An , Shaohai Hu , Shuaiqi Liu , Xinrui Wang , Zhihui Gu , Yudong Zhang","doi":"10.1016/j.knosys.2025.113613","DOIUrl":null,"url":null,"abstract":"<div><div>Extensive research is being conducted worldwide on emotion recognition, which is a crucial technology in affective computing. Electroencephalogram (EEG) signals are widely employed in emotion recognition owing to their ease of discernibility and high accuracy. Effectively harnessing the spatial-temporal-spectral features of EEG signals is essential for realizing accurate emotion classification due to their low signal-to-noise ratio. In this study, we proposed an EEG emotion recognition algorithm based on local and global domain adversarial attention neural networks, called LGDAAN-Nets, to address the problems of cross-subject EEG emotion recognition. Firstly, we constructed a ConvLSTM block with residual structures as a spatial-temporal-spectral feature to fully exploit the temporal relationship, spatial structure, and spectral information of the input spatial-temporal matrix and spatial-spectral matrix in the network. We then introduced a self-attention module as a supplementary component to the feature extractor, which integrates the long-range and multilevel dependencies of the cross-modal emotion features. This facilitates the learning of complementary information from different feature patterns and enhances the emotion recognition capability of the model. Lastly, we built a local-global domain discriminator using two local domain discriminators that reduce the distribution differences under different feature patterns and capture the locally invariant features of the EEG signals. The global domain discriminator minimizes the global differences in the fused features between the source and target domains, which improves the robustness and generalization performance of the model. The proposed method was comprehensively tested on the SEED, SEED-IV, and DEAP datasets and demonstrated superior performance over most existing emotion recognition methods. Additionally, experiments were also conducted on a self-collected EEG-based emotion dataset that included 20 subjects, which further validated the proposed model's performance in cross-dataset emotion recognition. The source code is available at: <span><span>https://github.com/cvmdsp/LGDAAN-Nets</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"318 ","pages":"Article 113613"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LGDAAN-Nets: A local and global domain adversarial attention neural networks for EEG emotion recognition\",\"authors\":\"Yanling An , Shaohai Hu , Shuaiqi Liu , Xinrui Wang , Zhihui Gu , Yudong Zhang\",\"doi\":\"10.1016/j.knosys.2025.113613\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Extensive research is being conducted worldwide on emotion recognition, which is a crucial technology in affective computing. Electroencephalogram (EEG) signals are widely employed in emotion recognition owing to their ease of discernibility and high accuracy. Effectively harnessing the spatial-temporal-spectral features of EEG signals is essential for realizing accurate emotion classification due to their low signal-to-noise ratio. 
In this study, we proposed an EEG emotion recognition algorithm based on local and global domain adversarial attention neural networks, called LGDAAN-Nets, to address the problems of cross-subject EEG emotion recognition. Firstly, we constructed a ConvLSTM block with residual structures as a spatial-temporal-spectral feature to fully exploit the temporal relationship, spatial structure, and spectral information of the input spatial-temporal matrix and spatial-spectral matrix in the network. We then introduced a self-attention module as a supplementary component to the feature extractor, which integrates the long-range and multilevel dependencies of the cross-modal emotion features. This facilitates the learning of complementary information from different feature patterns and enhances the emotion recognition capability of the model. Lastly, we built a local-global domain discriminator using two local domain discriminators that reduce the distribution differences under different feature patterns and capture the locally invariant features of the EEG signals. The global domain discriminator minimizes the global differences in the fused features between the source and target domains, which improves the robustness and generalization performance of the model. The proposed method was comprehensively tested on the SEED, SEED-IV, and DEAP datasets and demonstrated superior performance over most existing emotion recognition methods. Additionally, experiments were also conducted on a self-collected EEG-based emotion dataset that included 20 subjects, which further validated the proposed model's performance in cross-dataset emotion recognition. The source code is available at: <span><span>https://github.com/cvmdsp/LGDAAN-Nets</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"318 \",\"pages\":\"Article 113613\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-04-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125006598\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125006598","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
LGDAAN-Nets: A local and global domain adversarial attention neural networks for EEG emotion recognition
Extensive research is being conducted worldwide on emotion recognition, a crucial technology in affective computing. Electroencephalogram (EEG) signals are widely employed in emotion recognition owing to their ease of discernibility and high accuracy. Because EEG signals have a low signal-to-noise ratio, effectively harnessing their spatial-temporal-spectral features is essential for accurate emotion classification. In this study, we proposed an EEG emotion recognition algorithm based on local and global domain adversarial attention neural networks, called LGDAAN-Nets, to address the problem of cross-subject EEG emotion recognition. First, we constructed a ConvLSTM block with residual structures as the spatial-temporal-spectral feature extractor, fully exploiting the temporal relationships, spatial structure, and spectral information of the input spatial-temporal and spatial-spectral matrices. We then introduced a self-attention module as a supplementary component of the feature extractor, integrating the long-range and multilevel dependencies of the cross-modal emotion features; this facilitates the learning of complementary information from different feature patterns and enhances the emotion recognition capability of the model. Lastly, we built a local-global domain discriminator in which two local domain discriminators reduce the distribution differences under different feature patterns and capture the locally invariant features of the EEG signals, while a global domain discriminator minimizes the global differences in the fused features between the source and target domains, improving the robustness and generalization performance of the model. The proposed method was comprehensively tested on the SEED, SEED-IV, and DEAP datasets and demonstrated superior performance over most existing emotion recognition methods. Additionally, experiments were conducted on a self-collected EEG-based emotion dataset of 20 subjects, which further validated the proposed model's performance in cross-dataset emotion recognition. The source code is available at: https://github.com/cvmdsp/LGDAAN-Nets.
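The abstract describes the architecture only at a high level; the sketch below shows, in PyTorch, one way the named components (two branch feature extractors, self-attention fusion, and local plus global domain discriminators trained through gradient reversal) could fit together. All layer sizes, module names, and the simplified linear branch extractors are illustrative assumptions rather than the authors' implementation; the actual code is in the linked repository.

```python
# Hypothetical sketch in the spirit of LGDAAN-Nets; not the authors' released code.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass,
    the standard trick for training domain discriminators adversarially."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainDiscriminator(nn.Module):
    """Small MLP predicting whether a feature vector comes from the source or target domain."""

    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, x, lam=1.0):
        return self.net(GradientReversal.apply(x, lam))


class LGDAANSketch(nn.Module):
    """Illustrative assembly: two branch extractors (standing in for the residual
    ConvLSTM spatial-temporal and spatial-spectral branches), self-attention over
    the branch features, two local discriminators (one per branch), and one global
    discriminator on the fused representation."""

    def __init__(self, feat_dim=256, n_classes=3):
        super().__init__()
        # Placeholder branch extractors; the paper uses ConvLSTM blocks with residual structures.
        self.branch_t = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.branch_s = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.local_disc_t = DomainDiscriminator(feat_dim)
        self.local_disc_s = DomainDiscriminator(feat_dim)
        self.global_disc = DomainDiscriminator(feat_dim)

    def forward(self, x_temporal, x_spectral, lam=1.0):
        f_t = self.branch_t(x_temporal)        # spatial-temporal branch features
        f_s = self.branch_s(x_spectral)        # spatial-spectral branch features
        seq = torch.stack([f_t, f_s], dim=1)   # (batch, 2, feat_dim)
        fused, _ = self.attn(seq, seq, seq)    # self-attention across the two branches
        fused = fused.mean(dim=1)              # pool into one fused vector
        return (
            self.classifier(fused),            # emotion logits
            self.local_disc_t(f_t, lam),       # local domain logits, temporal branch
            self.local_disc_s(f_s, lam),       # local domain logits, spectral branch
            self.global_disc(fused, lam),      # global domain logits, fused features
        )
```

Under this reading, the gradient-reversal layer lets the discriminators be trained jointly with the emotion classifier: each discriminator learns to separate source from target features, while the reversed gradient pushes the shared extractor toward domain-invariant representations, locally per feature pattern and globally on the fused features.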
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.