A time-frequency feature fusion-based deep learning network for SSVEP frequency recognition.

IF 3.2 · CAS Tier 3 (Medicine) · Q2 NEUROSCIENCES
Frontiers in Neuroscience Pub Date : 2025-09-29 eCollection Date: 2025-01-01 DOI:10.3389/fnins.2025.1679451
Yiwei Dai, Zhengkui Chen, Tian-Ao Cao, Hongyou Zhou, Min Fang, Yanyun Dai, Lurong Jiang, Jijun Tong
Citations: 0

Abstract


Introduction: Steady-state visual evoked potential (SSVEP) has emerged as a pivotal branch in brain-computer interfaces (BCIs) due to its high signal-to-noise ratio (SNR) and elevated information transfer rate (ITR). However, substantial inter-subject variability in electroencephalographic (EEG) signals poses a significant challenge to current SSVEP frequency recognition. In particular, it is difficult to achieve high cross-subject classification accuracy in calibration-free scenarios, and the classification performance heavily depends on extensive calibration data.
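The information transfer rate (ITR) mentioned here is conventionally computed with Wolpaw's formula from the number of targets, the classification accuracy, and the time per selection. A minimal sketch (the 1 s selection time in the example is an assumption; the paper does not state its protocol timing):

```python
import math

def itr_bits_per_min(n_classes, accuracy, selection_time_s):
    """Wolpaw ITR: bits per selection, scaled to bits per minute."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, ITR is taken as 0
    if accuracy >= 1.0:
        bits = math.log2(n_classes)  # the error term vanishes at 100% accuracy
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * 60.0 / selection_time_s

# e.g. 12 classes at the reported 89.72% accuracy, assuming 1 s per selection
print(round(itr_bits_per_min(12, 0.8972, 1.0), 2))
```

This makes explicit why both a high SNR (accuracy) and short stimulation windows (selection time) drive the ITR that motivates SSVEP-based BCIs.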

Methods: To mitigate the reliance on large calibration datasets and enhance cross-subject generalization, we propose SSVEP time-frequency fusion network (SSVEP-TFFNet), an improved deep learning network fusing time-domain and frequency-domain features dynamically. The network comprises two parallel branches: a time-domain branch that ingests raw EEG signals and a frequency-domain branch that processes complex-spectrum features. The two branches extract the time-domain and frequency-domain features, respectively. Subsequently, these features are fused via a dynamic weighting mechanism and input to the classifier. This fusion strategy strengthens the feature expression ability and generalization across different subjects.
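The abstract does not specify the fusion layer, so the following is only a NumPy sketch of the two ideas it names: a "complex-spectrum" frequency-branch input (here, concatenated real and imaginary FFT parts, a common choice in SSVEP work) and a dynamic weighting of the two branch features via softmax-normalized scalars. All shapes, pooling choices, and the weight logits are hypothetical:

```python
import numpy as np

def complex_spectrum(eeg, n_fft=512):
    """Frequency-branch input: real and imaginary FFT parts concatenated
    along the last axis (assumed 'complex-spectrum' representation)."""
    spec = np.fft.rfft(eeg, n=n_fft, axis=-1)
    return np.concatenate([spec.real, spec.imag], axis=-1)

def dynamic_fusion(time_feat, freq_feat, logits):
    """Scale each branch by a softmax-normalized scalar weight, then
    concatenate for the classifier (sketch of 'dynamic weighting')."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()  # the two branch weights sum to 1
    return np.concatenate([w[0] * time_feat, w[1] * freq_feat])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))            # 8 channels x 256 samples (hypothetical)
freq_feat = complex_spectrum(eeg).mean(axis=0)  # pooled frequency features, shape (514,)
time_feat = eeg.mean(axis=0)                    # placeholder time features, shape (256,)
fused = dynamic_fusion(time_feat, freq_feat, np.array([0.2, -0.1]))
print(fused.shape)
```

In the actual network the weights would be learnable parameters and the branch features would come from convolutional encoders rather than simple pooling; the sketch only illustrates how a softmax over two scalars lets the model rebalance the branches per input.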

Results: Cross-subject classification was conducted on publicly available 12-class and 40-class SSVEP datasets, comparing SSVEP-TFFNet with traditional approaches and leading deep learning methods. SSVEP-TFFNet achieves an average classification accuracy of 89.72% on the 12-class dataset, surpassing the best baseline method by 1.83%. On the two 40-class datasets, it achieves average accuracies of 72.11% and 82.50%, outperforming the best baseline method by 7.40% and 6.89%, respectively.

Discussion: These results validate the efficacy of dynamic time-frequency feature fusion, and the proposed method provides a new paradigm for calibration-free SSVEP-based BCI systems.

Source journal: Frontiers in Neuroscience
CiteScore: 6.20
Self-citation rate: 4.70%
Annual articles: 2070
Review time: 14 weeks
Journal description: Neural Technology is devoted to the convergence between neurobiology and quantum-, nano- and micro-sciences. In our vision, this interdisciplinary approach should go beyond the technological development of sophisticated methods and should contribute in generating a genuine change in our discipline.