Journal of neural engineering (impact factor 3.8), published 2025-07-02
Li Li, Baofa Wei
DOI: 10.1088/1741-2552/adeaea
A two-stage EEG zero-shot classification algorithm guided by class reconstruction.
Researchers have long been dedicated to decoding human visual representations from neural signals. These studies are crucial in uncovering the mechanisms of visual processing in the human brain. Electroencephalogram (EEG) signals have recently garnered widespread attention due to their noninvasive nature and low cost, and EEG classification is one of the most popular topics in brain-computer interface (BCI) research. However, most traditional EEG classification algorithms struggle to generalize to unseen classes that were not involved in the training phase. The main objective of this work is to improve the performance of EEG classification on such unseen classes. We propose a two-stage zero-shot EEG classification algorithm guided by class reconstruction: the model is trained with a two-stage strategy based on class reconstruction, which enables it to thoroughly learn the relations and distinctions among EEG embeddings of different classes. The Contrastive Language-Image Pre-training (CLIP) model has a well-aligned latent space and strong cross-modality generalization ability; our method uses CLIP features to bridge the modality gap between EEG, images, and text, significantly improving performance on unseen classes. We evaluated the proposed method on the ImageStimulus-EEG dataset and compared it with a state-of-the-art model and a baseline model. The experimental results show that our model achieves superior Top-1, Top-3, and Top-5 accuracy on a 50-way zero-shot classification task, reaching 17.77%, 38.76%, and 54.75%, respectively. These results further validate the effectiveness of the proposed method for EEG zero-shot classification.
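The evaluation protocol described above can be illustrated with a minimal sketch. The abstract does not give the authors' implementation details, so the following is a generic, hypothetical version of zero-shot classification in a shared embedding space: each EEG embedding is matched to class embeddings (e.g., CLIP text or image features) by cosine similarity, and Top-k accuracy counts a sample as correct when its true class is among the k most similar classes. All names (`top_k_accuracy`, array shapes) are illustrative, not taken from the paper.

```python
import numpy as np

def top_k_accuracy(eeg_emb, class_emb, labels, k):
    """Zero-shot Top-k accuracy via cosine similarity.

    eeg_emb:   (n_samples, d) EEG embeddings in the shared space
    class_emb: (n_classes, d) class embeddings (e.g., CLIP features)
    labels:    (n_samples,) true class indices
    """
    # L2-normalize so the dot product equals cosine similarity
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    cls = class_emb / np.linalg.norm(class_emb, axis=1, keepdims=True)
    sims = eeg @ cls.T                        # (n_samples, n_classes)
    topk = np.argsort(-sims, axis=1)[:, :k]   # k most similar class indices
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()
```

In a 50-way zero-shot setting like the one reported, `class_emb` would hold the 50 unseen classes, and the metric would be computed for k = 1, 3, and 5.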