Transferring Common Model Parameters From Chirp-Modulated to Steady-State Visual Evoked Potentials for Calibration-Efficient BCIs

IF 6.4 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Bang Xiong;Bo Wan;Jiayang Huang;Pengfei Yang
DOI: 10.1109/TASE.2025.3587765
IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 18443-18457
Published: 2025-07-10 (Journal Article)
https://ieeexplore.ieee.org/document/11077380/
Citations: 0

Abstract

Transferring Common Model Parameters From Chirp-Modulated to Steady-State Visual Evoked Potentials for Calibration-Efficient BCIs
While calibration improves the information transfer rate (ITR) of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), the calibration process corresponding to all stimuli remains time-consuming and labor-intensive. Transfer learning has been applied to reduce the requirement for calibration data. However, a considerable amount of source domain stimulus data is still required to train the transferable model parameters. To further reduce the calibration time, this study proposes a cross-stimulus transfer learning method that transfers the common impulse response and spatial filter from a single chirp-modulated visual evoked potential (Chirp-VEP) to SSVEPs (CTSS). First, a hypothesis about the common impulse response in Chirp-VEP is made, and the corresponding chirp impulse is constructed. Then, based on the linear superposition theory and the common spatial filter, a least-squares problem is formulated between the Chirp-VEP and the reconstructed Chirp-VEP template to learn the common impulse response and spatial filter. Finally, these common model parameters are transferred to SSVEP-based BCIs for target identification. For performance evaluation, both offline and online experiments were conducted on the custom-developed Chirp-based and SSVEP-based BCI systems. Experimental results showed that Chirp-VEP and SSVEPs exhibited similar impulse responses and spatial filters. In online spelling tasks, the proposed method achieved an information transfer rate (ITR) of $156.39~\pm ~37.11$ bits/min with a calibration time of 4.5 s, achieving comparable performance to state-of-the-art (SOTA) methods (tlCCA: $156.30~\pm ~44.81$ bits/min with 50 s calibration time; CIR: $155.39~\pm ~36.89$ bits/min with 20 s calibration time). These results demonstrate that the proposed method facilitates the practical implementation of SSVEP-based BCIs in real-world applications.
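The ITR figures quoted above follow the standard Wolpaw formula used throughout the SSVEP-BCI literature; a minimal sketch is given below. The 40-target, 90%-accuracy, 2-s-per-selection parameters are illustrative values, not figures from the paper:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_time_s: float) -> float:
    """Wolpaw ITR in bits/min for an n_targets-class BCI.

    trial_time_s is the total time per selection (stimulus + gaze shift).
    """
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance level, no information is transferred
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits * 60.0 / trial_time_s

# Illustrative: a 40-target speller at 90% accuracy, 2 s per selection
print(f"{wolpaw_itr(40, 0.90, 2.0):.2f} bits/min")  # roughly 130 bits/min
```

Note that shortening the per-selection time raises ITR only as long as accuracy holds up, which is why reducing calibration cost without hurting classification accuracy is the central trade-off in this line of work.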
The source code and dataset are available at https://github.com/xxxb/CTSS.

Note to Practitioners: This study is motivated by the lack of efficient calibration methods in multi-command steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), which are widely used in home automation and clinical rehabilitation scenarios. The conventional use of SSVEP signals for calibration leads to low bandwidth utilization and prolonged calibration time. In this study, we propose a novel calibration method that transfers common model parameters from a single chirp-modulated visual evoked potential (Chirp-VEP) signal to multi-command SSVEP-based BCIs. The multi-frequency nature of the Chirp-VEP signal enables the extraction of shared features between Chirp-VEP and SSVEPs, thereby reducing the overall calibration time. Experimental results showed that the proposed calibration method achieved classification accuracy comparable to state-of-the-art methods with minimal calibration time, ultimately contributing to the development of user-friendly and efficient EEG-based control systems. Future work will explore the integration of Chirp-VEPs with various visual evoked potential (VEP) paradigms to support the development of diverse BCI applications, enhancing their practicality and accessibility in real-world settings.
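The abstract describes learning a common impulse response and spatial filter by formulating a least-squares problem between the recorded VEP and a template reconstructed under linear superposition. The sketch below is not the authors' CTSS implementation; it is a generic illustration of that alternating least-squares idea on synthetic data, with all signal sizes, the stand-in impulse train, and the noise level chosen hypothetically:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic setup (illustrative sizes, not from the paper) ---
n_ch, n_len, r_len = 8, 500, 30          # channels, samples, impulse-response taps
impulse = (rng.random(n_len) < 0.1).astype(float)  # stand-in chirp impulse train

# Convolution (Toeplitz-style) matrix: (H @ r)[t] = sum_k impulse[t - k] * r[k]
H = np.zeros((n_len, r_len))
for k in range(r_len):
    H[k:, k] = impulse[: n_len - k]

r_true = np.hanning(r_len)               # ground-truth common impulse response
w_true = rng.standard_normal(n_ch)       # ground-truth channel mixing weights
X = np.outer(w_true, H @ r_true) + 0.05 * rng.standard_normal((n_ch, n_len))

# --- Alternating least squares: min_{w, r} || w^T X - (H r)^T ||^2 ---
w = rng.standard_normal(n_ch)
for _ in range(20):
    y = w @ X                                          # spatially filtered EEG
    r, *_ = np.linalg.lstsq(H, y, rcond=None)          # impulse response given w
    template = H @ r                                   # reconstructed VEP template
    w, *_ = np.linalg.lstsq(X.T, template, rcond=None) # spatial filter given r

corr = np.corrcoef(w @ X, H @ r)[0, 1]
print(f"template correlation after ALS: {corr:.3f}")
```

Each subproblem (impulse response with the filter fixed, filter with the response fixed) is an ordinary least-squares solve, so the alternation converges quickly; the recovered `w` and `r` stand in for the common model parameters that, in the paper's method, are transferred from Chirp-VEP to the SSVEP stimuli.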
Source Journal
IEEE Transactions on Automation Science and Engineering
Category: Engineering & Technology – Automation & Control Systems
CiteScore: 12.50
Self-citation rate: 14.30%
Articles per year: 404
Review time: 3.0 months
Journal description: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.