Reducing calibration efforts of SSVEP-BCIs by shallow fine-tuning-based transfer learning.

Impact Factor 3.1 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Neurosciences)
Cognitive Neurodynamics | Pub Date: 2025-12-01 | Epub Date: 2025-05-26 | DOI: 10.1007/s11571-025-10264-8
Wenlong Ding, Aiping Liu, Xingui Chen, Chengjuan Xie, Kai Wang, Xun Chen
{"title":"Reducing calibration efforts of SSVEP-BCIs by shallow fine-tuning-based transfer learning.","authors":"Wenlong Ding, Aiping Liu, Xingui Chen, Chengjuan Xie, Kai Wang, Xun Chen","doi":"10.1007/s11571-025-10264-8","DOIUrl":null,"url":null,"abstract":"<p><p>The utilization of transfer learning (TL), particularly through pre-training and fine-tuning, in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) has substantially reduced the calibration efforts. However, commonly employed fine-tuning approaches, including end-to-end fine-tuning and last-layer fine-tuning, require data from target subjects that encompass all categories (stimuli), resulting in a time-consuming data collection process, especially in systems with numerous categories. To address this challenge, this study introduces a straightforward yet effective ShallOw Fine-Tuning (SOFT) method to substantially reduce the number of calibration categories needed for model fine-tuning, thereby further mitigating the calibration efforts for target subjects. Specifically, SOFT involves freezing the parameters of the deeper layers while updating those of the shallow layers during fine-tuning. Freezing the parameters of deeper layers preserves the model's ability to recognize semantic and high-level features across all categories, as established during pre-training. Moreover, data from different categories exhibit similar individual-specific low-level features in SSVEP-BCIs. Consequently, updating the parameters of shallow layers-responsible for processing low-level features-with data solely from partial categories enables the fine-tuned model to efficiently capture the individual-related features shared by all categories. The effectiveness of SOFT is validated using two public datasets. Comparative analysis with commonly used end-to-end and last-layer fine-tuning methods reveals that SOFT achieves higher classification accuracy while requiring fewer calibration categories. The proposed SOFT method further decreases the calibration efforts for target subjects by reducing the calibration category requirements, thereby improving the feasibility of SSVEP-BCIs for real-world applications.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"81"},"PeriodicalIF":3.1000,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106289/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Neurodynamics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11571-025-10264-8","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/5/26 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

The use of transfer learning (TL), particularly through pre-training and fine-tuning, in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) has substantially reduced calibration effort. However, commonly employed fine-tuning approaches, including end-to-end fine-tuning and last-layer fine-tuning, require target-subject data covering all categories (stimuli), resulting in a time-consuming data collection process, especially in systems with many categories. To address this challenge, this study introduces a straightforward yet effective ShallOw Fine-Tuning (SOFT) method that substantially reduces the number of calibration categories needed for model fine-tuning, thereby further lowering the calibration burden on target subjects. Specifically, SOFT freezes the parameters of the deeper layers while updating those of the shallow layers during fine-tuning. Freezing the deeper layers preserves the model's ability, established during pre-training, to recognize semantic and high-level features across all categories. Moreover, in SSVEP-BCIs, data from different categories exhibit similar individual-specific low-level features. Consequently, updating the parameters of the shallow layers, which are responsible for processing low-level features, with data from only a subset of categories enables the fine-tuned model to efficiently capture the individual-related features shared by all categories. The effectiveness of SOFT is validated on two public datasets. Comparative analysis with commonly used end-to-end and last-layer fine-tuning methods shows that SOFT achieves higher classification accuracy while requiring fewer calibration categories. By reducing the calibration category requirement, SOFT further decreases the calibration effort for target subjects and improves the feasibility of SSVEP-BCIs for real-world applications.
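The freezing pattern at the core of SOFT is easy to illustrate. Below is a minimal PyTorch sketch, not the authors' implementation: the two-block network (SSVEPNet), its layer sizes, and the random calibration tensors are all hypothetical. It shows the mechanism described above: the deeper block keeps its pre-trained parameters, while only the shallow block is updated with target-subject data drawn from a subset of the stimulus categories.

```python
# Minimal SOFT-style fine-tuning sketch (PyTorch). The architecture below is
# illustrative only; the paper's actual model and hyperparameters may differ.
import torch
import torch.nn as nn

class SSVEPNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=250, n_classes=40):
        super().__init__()
        # Shallow block: spatial and temporal filtering of raw EEG, assumed to
        # carry the individual-specific low-level features.
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),            # spatial filter
            nn.Conv2d(16, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filter
            nn.BatchNorm2d(16),
            nn.ELU(),
        )
        # Deep block: higher-level, category-related representations that stay
        # frozen after pre-training on source subjects.
        feat_dim = 32 * ((n_samples - 11) // 4 + 1)
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=(1, 11), stride=(1, 4)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.Flatten(),
            nn.Linear(feat_dim, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
        return self.deep(self.shallow(x))

model = SSVEPNet()
# ... load weights pre-trained on source subjects here ...

# SOFT: freeze the deeper layers; only shallow-layer parameters remain trainable.
for p in model.deep.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# Target-subject calibration data covering only a subset of the 40 stimulus
# categories (e.g. labels 0-7); random tensors stand in for real EEG epochs.
x_cal = torch.randn(32, 1, 8, 250)
y_cal = torch.randint(0, 8, (32,))

model.train()
for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x_cal), y_cal)
    loss.backward()
    optimizer.step()
```

In a full calibration pipeline one would also decide whether the frozen block's BatchNorm running statistics should continue updating during fine-tuning; calling model.deep.eval() before the loop would keep them fixed as well.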

Source journal: Cognitive Neurodynamics (Medicine - Neurosciences)
CiteScore: 6.90
Self-citation rate: 18.90%
Articles published: 140
Review time: 12 months
Journal description: Cognitive Neurodynamics provides a unique forum of communication and cooperation for scientists and engineers working in the field of cognitive neurodynamics, intelligent science and applications, bridging the gap between theory and application, without any preference for purely theoretical, experimental or computational models. The emphasis is on publishing original models of cognitive neurodynamics, novel computational theories and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is also very welcome. The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, intelligent science as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computer and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that for one reason or another do not make a sufficiently complete story to justify a full article publication. Brief communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.