SAD-VER: A Self-supervised, Diffusion probabilistic model-based data augmentation framework for Visual-stimulus EEG Recognition

Impact Factor: 8.0 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 1 (Engineering & Technology)
Junjie Huang, Mingyang Li, Wanzhong Chen
DOI: 10.1016/j.aei.2025.103298
Journal: Advanced Engineering Informatics, Volume 65, Article 103298
Published: 2025-04-12 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S1474034625001910
Citations: 0

Abstract

The decoding of EEG-based visual stimuli has become a major topic in Brain–Computer Interface (BCI) research. However, EEG data scarcity in visual-stimulus decoding research makes it difficult to build effective and stable deep learning models. In this paper we therefore propose a novel data augmentation framework, the Self-supervised, Adaptive-variance Diffusion probabilistic model-based Visual-stimulus EEG Augmentation framework (SAD-VER), for enhancing and recognizing visual-stimulus EEG data. As the first work to introduce diffusion models into EEG-based visual-stimulus research, SAD-VER builds its generation process around a carefully designed diffusion model that produces high-quality and diverse EEG samples. This process is self-optimized with a Bayesian hyperparameter optimizer that maximizes the quality of the generated EEG samples in a self-supervised manner. A modified convolutional network is used for quality analysis and decoding of the augmented EEG. Experimental results demonstrate that SAD-VER improves the decoding accuracy of existing models by generating high-quality EEG samples and achieves state-of-the-art performance across a range of visual-stimulus EEG decoding tasks. Further analysis indicates that EEG generated by SAD-VER enhances the separability of features between categories and helps locate crucial brain-region information. Code for this research is available at: https://github.com/yellow006/SAD-VER.
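The abstract only summarizes the generative component; the actual implementation is in the repository linked above. As a rough illustration of the mechanism that denoising diffusion probabilistic models are built on, the sketch below shows the forward noising process q(x_t | x_0) that such a model learns to invert. The cosine noise schedule, array shapes, and function names here are illustrative assumptions, not details taken from SAD-VER:

```python
import numpy as np

def cosine_alpha_bar(T=1000, s=0.008):
    # Cosine noise schedule; alpha_bar[t] is the cumulative
    # signal-retention factor after t forward diffusion steps.
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]  # normalize so alpha_bar[0] == 1

def forward_diffuse(x0, t, alpha_bar, rng):
    # Sample x_t ~ q(x_t | x_0): scale the clean signal down and
    # mix in Gaussian noise according to the schedule at step t.
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
alpha_bar = cosine_alpha_bar()
x0 = rng.standard_normal((32, 128))  # toy (channels x time samples) EEG epoch
xt, eps = forward_diffuse(x0, 500, alpha_bar, rng)
```

A denoiser network would then be trained to predict `eps` from `(xt, t)`; generation runs the learned reverse process from pure noise. How SAD-VER adapts the variance schedule and couples this with Bayesian hyperparameter search is described in the full paper.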
Source journal

Advanced Engineering Informatics (Engineering, Multidisciplinary)

CiteScore: 12.40
Self-citation rate: 18.20%
Annual publications: 292
Review time: 45 days
Journal description: Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.