Universal semantic feature extraction from EEG signals: a task-independent framework
Hossein Ahmadi, Luca Mesin
Journal of neural engineering, 22(3), published 2025-05-06
DOI: 10.1088/1741-2552/add08f
Citations: 0
Abstract
Objective. Extracting universal, task-independent semantic features from electroencephalography (EEG) signals remains an open challenge. Traditional approaches are often task-specific, limiting their generalization across different EEG paradigms. This study aims to develop a robust, unsupervised framework for learning high-level, task-independent neural representations.

Approach. We propose a novel framework integrating convolutional neural networks (CNNs), autoencoders, and Transformers to extract both low-level spatiotemporal patterns and high-level semantic features from EEG signals. The model is trained in an unsupervised manner to ensure adaptability across diverse EEG paradigms, including motor imagery (MI), steady-state visually evoked potentials (SSVEPs), and event-related potentials (ERPs, specifically P300). Extensive analyses, including clustering, correlation, and ablation studies, validate the quality and interpretability of the extracted features.

Main results. Our method achieves state-of-the-art performance, with average classification accuracies of 83.50% and 84.84% on the MI datasets (BCICIV_2a and BCICIV_2b), 98.41% and 99.66% on the SSVEP datasets (Lee2019-SSVEP and Nakanishi2015), and an average AUC of 91.80% across eight ERP datasets. Analyses with t-distributed stochastic neighbor embedding (t-SNE) and clustering reveal that the extracted features exhibit greater separability and structure than raw EEG data. Correlation studies confirm the framework's ability to balance universal and subject-specific features, while ablation results highlight the near-optimality of the selected model configuration.

Significance. This work establishes a universal framework for task-independent semantic feature extraction from EEG signals, bridging the gap between conventional feature engineering and modern deep learning methods. By providing robust, generalizable representations across diverse EEG paradigms, this approach lays the foundation for advanced brain-computer interface applications, cross-task EEG analysis, and future developments in semantic EEG processing.
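The three-stage pipeline named in the Approach section (CNN for low-level spatiotemporal patterns, autoencoder for compression, attention for high-level context) can be caricatured in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: the channel/filter counts are invented, the "autoencoder" is stood in for by a truncated SVD (the optimal linear encoder-decoder), and the "Transformer" is a single self-attention step with identity projections.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    # x: (channels, samples); kernels: (n_filters, channels, k)
    # Valid cross-correlation over time, shared across channels, then ReLU.
    n_f, _, k = kernels.shape
    out_len = x.shape[1] - k + 1
    out = np.zeros((n_f, out_len))
    for f in range(n_f):
        for t in range(out_len):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return np.maximum(out, 0.0)

def linear_autoencode(z, d_latent):
    # Truncated SVD as a stand-in for a trained linear autoencoder:
    # latent = encoder(z), recon = decoder(latent).
    U, S, Vt = np.linalg.svd(z, full_matrices=False)
    latent = U[:, :d_latent] * S[:d_latent]
    recon = latent @ Vt[:d_latent]
    return latent, recon

def self_attention(tokens):
    # Single-head scaled dot-product attention with identity Q/K/V projections.
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ tokens

eeg = rng.standard_normal((8, 64))               # toy window: 8 channels, 64 samples
kernels = rng.standard_normal((4, 8, 5)) * 0.1   # 4 spatiotemporal filters
feat = conv1d_relu(eeg, kernels)                 # (4, 60) low-level feature maps
latent, recon = linear_autoencode(feat, d_latent=2)
tokens = feat.T                                  # treat each time step as a token
semantic = self_attention(tokens).mean(axis=0)   # pooled "semantic" vector, (4,)
```

In the actual framework all three stages are trained jointly (and unsupervised); the sketch only shows how the representations shrink from raw channels to a compact context-aware vector.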
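The "enhanced separability" claim in the Main results section is the kind of statement a clustering score can quantify. Below is a minimal silhouette-coefficient sketch on synthetic 2-D points standing in for raw data versus extracted features; the silhouette metric, the toy data, and the cluster geometry are all assumptions for illustration, not the paper's actual t-SNE/clustering analysis.

```python
import numpy as np

def silhouette(X, labels):
    # Mean silhouette coefficient in [-1, 1]:
    # higher means tighter, better-separated clusters.
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(labels)
    scores = []
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()                    # mean intra-cluster distance
        b = min(D[i, labels == c].mean()         # nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
labels = np.array([0] * 10 + [1] * 10)

# "Raw"-like points: two heavily overlapping clouds.
raw = rng.standard_normal((20, 2)) + np.where(labels[:, None] == 0, 0.0, 0.5)

# "Extracted-feature"-like points: same labels, well separated.
feats = rng.standard_normal((20, 2)) * 0.3 + np.where(labels[:, None] == 0, 0.0, 5.0)

raw_score = silhouette(raw, labels)
feat_score = silhouette(feats, labels)
```

A higher silhouette for the feature space than for the raw space is what "greater separability and structure" means operationally; the paper draws the analogous comparison on t-SNE embeddings of real EEG.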