Augmenting Electroencephalogram Transformer for Steady-State Visually Evoked Potential-Based Brain-Computer Interfaces
Jin Yue, Xiaolin Xiao, Kun Wang, Weibo Yi, Tzyy-Ping Jung, Minpeng Xu, Dong Ming
Cyborg and Bionic Systems (Washington, D.C.), vol. 6, article 0379, published 2025-10-07 (eCollection 2025)
DOI: 10.34133/cbsystems.0379 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12501431/pdf/
Abstract
Objective: Advancing high-speed steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI) systems requires effective electroencephalogram (EEG) decoding through deep learning. However, challenges persist due to data sparsity and the unclear neural basis of most augmentation techniques. Furthermore, effectively processing dynamic EEG signals and accommodating augmented data require a more sophisticated model tailored to the unique characteristics of EEG signals.

Approach: This study introduces background EEG mixing (BGMix), a novel data augmentation technique grounded in neural principles that enhances training samples by exchanging background noise between different classes. Building on this, we propose the augment EEG Transformer (AETF), a model designed to capture the temporal, spatial, and frequential features of EEG signals while leveraging the advantages of Transformer architectures.

Main results: Experimental evaluations on 2 publicly available SSVEP datasets demonstrate the efficacy of the BGMix strategy and the AETF model. BGMix notably improved the average classification accuracy of 4 distinct deep learning models, with gains ranging from 11.06% to 21.39% and from 4.81% to 25.17% on the respective datasets. Furthermore, the AETF model outperformed state-of-the-art baseline models, excelling with short training data lengths and achieving the highest information transfer rates (ITRs) of 205.82 ± 15.81 bits/min and 240.03 ± 14.91 bits/min on the 2 datasets.

Significance: This study introduces a novel EEG augmentation method and a new approach to designing deep learning models informed by the neural processes underlying EEG. These innovations significantly improve the performance and practicality of high-speed SSVEP-based BCI systems.
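For context on the reported ITRs, SSVEP-BCI studies conventionally report ITR in bits/min using the standard Wolpaw formula below, where N is the number of targets, P the classification accuracy, and T the selection time in seconds. The abstract does not state the exact computation used, so this is given only as the customary definition:

$$\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right] \ \text{bits/min}$$

To make the BGMix idea more concrete, the following is a minimal NumPy sketch of one plausible reading of the augmentation: the evoked component of a trial is approximated by its class-average template, the residual is treated as background EEG, and that background is swapped in from a trial of a different class. The decomposition, function name, and array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bgmix_sketch(trials, labels, rng=None):
    """Hypothetical illustration of background EEG mixing (BGMix).

    Assumes the evoked SSVEP component of each trial can be approximated by
    its class-average template and the background EEG by the residual; the
    paper's actual decomposition and mixing procedure may differ.

    trials: array of shape (n_trials, n_channels, n_samples)
    labels: array of shape (n_trials,)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Class-average templates approximate the stimulus-locked component.
    templates = {c: trials[labels == c].mean(axis=0) for c in np.unique(labels)}
    augmented = []
    for x, y in zip(trials, labels):
        # Pick a donor trial from a different class and take its residual
        # (donor trial minus its own class template) as background EEG.
        donor_idx = rng.choice(np.flatnonzero(labels != y))
        donor_bg = trials[donor_idx] - templates[labels[donor_idx]]
        # New sample: this class's evoked template plus the donor background.
        augmented.append(templates[y] + donor_bg)
    return np.stack(augmented), labels.copy()
```

In this reading, each augmented trial keeps its original label because only the non-stimulus-locked background changes, which is consistent with the abstract's description of replacing background noise between classes.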