ChannelMix-based transformer and convolutional multi-view feature fusion network for unsupervised domain adaptation in EEG emotion recognition

Chengpeng Sun, Xiujuan Wang, Liubing Chen
Expert Systems with Applications, Volume 280, Article 127456. Published 2025-04-11. DOI: 10.1016/j.eswa.2025.127456

Electroencephalogram (EEG)-based emotion recognition has become a focus of brain–computer interface research. However, differences in EEG signals across subjects lead to poor generalization. Moreover, current approaches extract temporal and spatial information separately, resulting in inadequate feature fusion. This study develops a novel ChannelMix-based transformer and convolutional multi-view feature fusion network (CMTCF) to enhance cross-subject EEG emotion recognition. Specifically, a bi-directional fusion module based on a convolutional neural network (CNN)-Transformer structure is introduced to extract multi-view spatial and temporal features, enabling the representation of rich spatiotemporal information. Subsequently, the ChannelMix module is designed to establish an intermediate domain, facilitating alignment of the target and source domains and reducing their discrepancy. Additionally, a soft pseudo-label module enhances the discriminative power of target-domain data within the feature space. To further improve generalization, a ChannelMix-based data augmentation method is employed. Comprehensive experiments on the SEED, SEED-IV, and SEED-VII benchmark datasets achieve recognition accuracies of 93.80% (±4.96), 79.37% (±6.05), and 49.13% (±8.22), respectively, demonstrating that the CMTCF network achieves competitive results in cross-subject EEG emotion recognition tasks.
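The abstract describes ChannelMix as building an intermediate domain by mixing source- and target-domain EEG samples along the channel dimension. The paper's exact mixing rule is not given here, so the following is only a minimal sketch of one plausible interpretation: for each electrode channel, take that channel's signal from the source trial with probability lam, otherwise from the target trial. The function name `channel_mix`, the per-channel Bernoulli mask, and the (channels, time) layout are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def channel_mix(source, target, lam, seed=0):
    """Hypothetical ChannelMix-style mixing of two EEG segments.

    source, target: arrays of shape (channels, time) from the source and
    target domains. Each channel of the output is copied whole from one
    domain, chosen per channel with probability `lam` for the source,
    yielding an intermediate-domain sample.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(source.shape[0]) < lam   # one Bernoulli draw per channel
    mask = mask[:, None]                       # broadcast the choice over time
    return np.where(mask, source, target)

# Example: 62-channel segments (SEED recordings use 62 electrodes), 200 samples
src = np.ones((62, 200))    # stand-in source-domain trial
tgt = np.zeros((62, 200))   # stand-in target-domain trial
mixed = channel_mix(src, tgt, lam=0.5)
```

Because whole channels are swapped rather than interpolated, each channel of the mixed sample keeps a physically plausible EEG waveform, which is one reason channel-level mixing is attractive for this modality.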
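The abstract also mentions a soft pseudo-label module for unlabeled target-domain data. The details are not specified here; a common way to realize soft pseudo-labels, sketched below under that assumption, is to keep the classifier's temperature-scaled softmax distribution over emotion classes rather than a hard argmax, so that uncertain target samples contribute proportionally to training. The function name and the temperature parameter `T` are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_pseudo_labels(logits, T=2.0):
    """Temperature-scaled softmax over target-domain logits.

    logits: array of shape (batch, classes). Returns a probability
    distribution per sample (a "soft" pseudo-label); a higher T flattens
    the distribution, reflecting lower confidence in the prediction.
    """
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

# Example: logits for one target sample over three emotion classes
probs = soft_pseudo_labels(np.array([[2.0, 1.0, 0.0]]))
```

The soft distribution can then weight a target-domain classification loss, which is one standard way such a module sharpens class boundaries in the shared feature space.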
About the journal:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.