Augmenting Electroencephalogram Transformer for Steady-State Visually Evoked Potential-Based Brain-Computer Interfaces.

IF 18.1 · Q1 (Engineering, Biomedical)
Cyborg and bionic systems (Washington, D.C.) · Pub Date: 2025-10-07 · eCollection Date: 2025-01-01 · DOI: 10.34133/cbsystems.0379
Jin Yue, Xiaolin Xiao, Kun Wang, Weibo Yi, Tzyy-Ping Jung, Minpeng Xu, Dong Ming
Citations: 0

Abstract

Objective: Advancing high-speed steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI) systems requires effective electroencephalogram (EEG) decoding through deep learning. However, challenges persist due to data sparsity and the unclear neural basis of most augmentation techniques. Furthermore, effectively processing dynamic EEG signals and accommodating augmented data requires a more sophisticated model tailored to the unique characteristics of EEG. Approach: This study introduces background EEG mixing (BGMix), a novel data augmentation technique grounded in neural principles that enhances training samples by exchanging background noise between different classes. Building on this, we propose the augment EEG Transformer (AETF), a Transformer-based model designed to capture the temporal, spatial, and frequency features of EEG signals, leveraging the advantages of Transformer architectures. Main results: Experimental evaluations on 2 publicly available SSVEP datasets demonstrate the efficacy of the BGMix strategy and the AETF model. BGMix notably improved the average classification accuracy of 4 distinct deep learning models, with gains ranging from 11.06% to 21.39% and from 4.81% to 25.17% on the respective datasets. Furthermore, the AETF model outperformed state-of-the-art baseline models, excelling with short training data lengths and achieving the highest information transfer rates (ITRs) of 205.82 ± 15.81 bits/min and 240.03 ± 14.91 bits/min on the 2 datasets. Significance: This study introduces a novel EEG augmentation method and a new approach to designing deep learning models informed by the neural processes underlying EEG. These innovations significantly improve the performance and practicality of high-speed SSVEP-based BCI systems.
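The abstract describes BGMix only at a high level: training samples are augmented by exchanging the background (non-stimulus-locked) EEG between trials of different classes. As a rough, hypothetical illustration of that idea rather than the authors' implementation, one simple way to realize it is to approximate each trial's SSVEP component with its class-average template, treat the residual as background, and recombine a template with a residual borrowed from a trial of another class. The sketch below assumes NumPy arrays shaped (trials, channels, samples) with integer labels; all names are illustrative.

```python
import numpy as np

def bgmix_augment(X, y, rng=None):
    """Illustrative BGMix-style augmentation (a sketch, not the paper's exact algorithm).

    X : ndarray, shape (n_trials, n_channels, n_samples) -- EEG training trials
    y : ndarray, shape (n_trials,)                        -- integer class labels
    Returns one synthetic trial per original trial, with unchanged labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    classes = np.unique(y)

    # Class-average templates approximate the stimulus-locked SSVEP component.
    templates = {c: X[y == c].mean(axis=0) for c in classes}

    # Background EEG = trial minus its class template (the non-phase-locked residual).
    backgrounds = X - np.stack([templates[c] for c in y])

    X_aug = np.empty_like(X)
    for i, c in enumerate(y):
        # Borrow the background from a randomly chosen trial of a *different* class
        # and add it to this trial's own class template.
        donor = rng.choice(np.flatnonzero(y != c))
        X_aug[i] = templates[c] + backgrounds[donor]

    return X_aug, y.copy()
```

In this reading, the augmentation preserves the class-defining, stimulus-locked signal while diversifying the background activity, which matches the neural rationale stated in the abstract; consult the full paper for how the background is actually estimated and mixed.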
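The ITRs quoted above (205.82 and 240.03 bits/min) are, as is conventional in SSVEP-BCI studies, presumably computed with the standard (Wolpaw) information-transfer-rate formula; the abstract does not restate it, so for reference:

\[
\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right] \ \text{bits/min},
\]

where N is the number of stimulus targets, P is the classification accuracy, and T is the selection time in seconds (data length plus any gaze-shift time). Whether gaze-shift time is included in T here should be checked against the full paper.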

Source journal metrics: CiteScore 7.70 · Self-citation rate 0.00% · Review time 21 weeks