{"title":"将通用模型参数从啁啾调制转移到稳态视觉诱发电位用于校准高效脑机接口","authors":"Bang Xiong;Bo Wan;Jiayang Huang;Pengfei Yang","doi":"10.1109/TASE.2025.3587765","DOIUrl":null,"url":null,"abstract":"While calibration improves the information transfer rate (ITR) of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), the calibration process corresponding to all stimuli remains time-consuming and labor-intensive. Transfer learning has been applied to reduce the requirement for calibration data. However, a considerable amount of source domain stimulus data is still required to train the transferable model parameters. To further reduce the calibration time, this study proposes a cross-stimulus transfer learning method that transfers the common impulse response and spatial filter from a single chirp-modulated visual evoked potential (Chirp-VEP) To SSVEPs (CTSS). First, a hypothesis about the common impulse response in Chirp-VEP is made, and the corresponding chirp impulse is constructed. Then, based on the linear superposition theory and the common spatial filter, a least-squares problem is formulated between the Chirp-VEP and the reconstructed Chirp-VEP template to learn the common impulse response and spatial filter. Finally, these common model parameters are transferred to SSVEP-based BCIs for target identification. For performance evaluation, both offline and online experiments were conducted on the custom-developed Chirp-based and SSVEP-based BCI systems. Experimental results showed that Chirp-VEP and SSVEPs exhibited similar impulse responses and spatial filters. In online spelling tasks, the proposed method achieved an information transfer rate (ITR) of <inline-formula> <tex-math>$156.39~\\pm ~37.11$ </tex-math></inline-formula> bits/min with a calibration time of 4.5 s to achieve comparable performance to state-of-the-art (SOTA) methods (tlCCA: <inline-formula> <tex-math>$156.30~\\pm ~44.81$ </tex-math></inline-formula> bits/min with 50 s calibration time, CIR: <inline-formula> <tex-math>$155.39~\\pm ~36.89$ </tex-math></inline-formula> bits/min with 20 s calibration time). These results demonstrate that the proposed method facilitates the practical implementation of SSVEP-based BCIs in real-world applications. The source code and dataset are available at <uri>https://github.com/xxxb/CTSS</uri> Note to Practitioners—This study is motivated by the lack of efficient calibration methods in multi-command steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs), which are widely used in home automation and clinical rehabilitation scenarios. The conventional use of SSVEP signals for calibration leads to low bandwidth utilization and prolonged calibration time. In this study, we propose a novel calibration method that transfers common model parameters from a single Chirp-modulated visual evoked potential (Chirp-VEP) signal to multi-command SSVEP-based BCIs. The multi-frequency nature of the Chirp-VEP signal enables the extraction of shared features between Chirp-VEP and SSVEPs, thereby reducing the overall calibration time. The results of the experiments showed that the proposed calibration method achieved comparable classification accuracy to state-of-the-art methods with minimal calibration time, ultimately contributing to the development of user-friendly and efficient EEG-based control systems. 
Future work will explore the integration of Chirp-VEPs with various visual evoked potential (VEP) paradigms to support the development of diverse BCI applications, enhancing their practicality and accessibility in real-world settings.","PeriodicalId":51060,"journal":{"name":"IEEE Transactions on Automation Science and Engineering","volume":"22 ","pages":"18443-18457"},"PeriodicalIF":6.4000,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Transferring Common Model Parameters From Chirp-Modulated to Steady-State Visual Evoked Potentials for Calibration-Efficient BCIs\",\"authors\":\"Bang Xiong;Bo Wan;Jiayang Huang;Pengfei Yang\",\"doi\":\"10.1109/TASE.2025.3587765\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While calibration improves the information transfer rate (ITR) of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), the calibration process corresponding to all stimuli remains time-consuming and labor-intensive. Transfer learning has been applied to reduce the requirement for calibration data. However, a considerable amount of source domain stimulus data is still required to train the transferable model parameters. To further reduce the calibration time, this study proposes a cross-stimulus transfer learning method that transfers the common impulse response and spatial filter from a single chirp-modulated visual evoked potential (Chirp-VEP) To SSVEPs (CTSS). First, a hypothesis about the common impulse response in Chirp-VEP is made, and the corresponding chirp impulse is constructed. Then, based on the linear superposition theory and the common spatial filter, a least-squares problem is formulated between the Chirp-VEP and the reconstructed Chirp-VEP template to learn the common impulse response and spatial filter. Finally, these common model parameters are transferred to SSVEP-based BCIs for target identification. For performance evaluation, both offline and online experiments were conducted on the custom-developed Chirp-based and SSVEP-based BCI systems. Experimental results showed that Chirp-VEP and SSVEPs exhibited similar impulse responses and spatial filters. In online spelling tasks, the proposed method achieved an information transfer rate (ITR) of <inline-formula> <tex-math>$156.39~\\\\pm ~37.11$ </tex-math></inline-formula> bits/min with a calibration time of 4.5 s to achieve comparable performance to state-of-the-art (SOTA) methods (tlCCA: <inline-formula> <tex-math>$156.30~\\\\pm ~44.81$ </tex-math></inline-formula> bits/min with 50 s calibration time, CIR: <inline-formula> <tex-math>$155.39~\\\\pm ~36.89$ </tex-math></inline-formula> bits/min with 20 s calibration time). These results demonstrate that the proposed method facilitates the practical implementation of SSVEP-based BCIs in real-world applications. The source code and dataset are available at <uri>https://github.com/xxxb/CTSS</uri> Note to Practitioners—This study is motivated by the lack of efficient calibration methods in multi-command steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs), which are widely used in home automation and clinical rehabilitation scenarios. The conventional use of SSVEP signals for calibration leads to low bandwidth utilization and prolonged calibration time. 
In this study, we propose a novel calibration method that transfers common model parameters from a single Chirp-modulated visual evoked potential (Chirp-VEP) signal to multi-command SSVEP-based BCIs. The multi-frequency nature of the Chirp-VEP signal enables the extraction of shared features between Chirp-VEP and SSVEPs, thereby reducing the overall calibration time. The results of the experiments showed that the proposed calibration method achieved comparable classification accuracy to state-of-the-art methods with minimal calibration time, ultimately contributing to the development of user-friendly and efficient EEG-based control systems. Future work will explore the integration of Chirp-VEPs with various visual evoked potential (VEP) paradigms to support the development of diverse BCI applications, enhancing their practicality and accessibility in real-world settings.\",\"PeriodicalId\":51060,\"journal\":{\"name\":\"IEEE Transactions on Automation Science and Engineering\",\"volume\":\"22 \",\"pages\":\"18443-18457\"},\"PeriodicalIF\":6.4000,\"publicationDate\":\"2025-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Automation Science and Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11077380/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Automation Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11077380/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Transferring Common Model Parameters From Chirp-Modulated to Steady-State Visual Evoked Potentials for Calibration-Efficient BCIs
While calibration improves the information transfer rate (ITR) of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), collecting calibration data for every stimulus remains time-consuming and labor-intensive. Transfer learning has been applied to reduce the amount of calibration data required; however, a considerable amount of source-domain stimulus data is still needed to train the transferable model parameters. To further reduce calibration time, this study proposes a cross-stimulus transfer learning method that transfers the common impulse response and spatial filter from a single chirp-modulated visual evoked potential (Chirp-VEP) to SSVEPs (CTSS). First, the impulse response is hypothesized to be common across Chirp-VEP and SSVEPs, and the corresponding chirp impulse sequence is constructed. Then, based on linear superposition theory and the common spatial filter, a least-squares problem is formulated between the Chirp-VEP and the reconstructed Chirp-VEP template to learn the common impulse response and spatial filter. Finally, these common model parameters are transferred to SSVEP-based BCIs for target identification. For performance evaluation, both offline and online experiments were conducted on custom-developed Chirp-based and SSVEP-based BCI systems. Experimental results showed that Chirp-VEP and SSVEPs exhibit similar impulse responses and spatial filters. In online spelling tasks, the proposed method achieved an ITR of $156.39 \pm 37.11$ bits/min with a calibration time of only 4.5 s, comparable to state-of-the-art (SOTA) methods (tlCCA: $156.30 \pm 44.81$ bits/min with 50 s of calibration; CIR: $155.39 \pm 36.89$ bits/min with 20 s of calibration). These results demonstrate that the proposed method facilitates the practical deployment of SSVEP-based BCIs in real-world applications. The source code and dataset are available at https://github.com/xxxb/CTSS.

Note to Practitioners—This study is motivated by the lack of efficient calibration methods for multi-command steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), which are widely used in home automation and clinical rehabilitation. Conventional calibration with SSVEP signals alone leads to low bandwidth utilization and prolonged calibration time. In this study, we propose a novel calibration method that transfers common model parameters from a single chirp-modulated visual evoked potential (Chirp-VEP) signal to multi-command SSVEP-based BCIs. The multi-frequency nature of the Chirp-VEP signal enables the extraction of features shared between Chirp-VEP and SSVEPs, thereby reducing the overall calibration time. Experiments showed that the proposed calibration method matched the classification accuracy of state-of-the-art methods with minimal calibration time, ultimately contributing to user-friendly and efficient EEG-based control systems. Future work will explore integrating Chirp-VEPs with other visual evoked potential (VEP) paradigms to support diverse BCI applications, enhancing their practicality and accessibility in real-world settings.
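The core of CTSS is the least-squares decomposition of a single Chirp-VEP trial into a common impulse response and a common spatial filter under the linear-superposition model. The sketch below is a minimal NumPy illustration of that idea, not the authors' released implementation (see the repository above): it alternates between solving for the spatial filter and the impulse response. The sampling rate, chirp band, impulse-response length, and channel count are all assumed placeholders.

```python
# Minimal sketch of the linear-superposition decomposition described in the
# abstract: estimate a common impulse response g and a common spatial filter w
# from one multichannel Chirp-VEP trial by alternating least squares.
# NOT the authors' released code; the sampling rate, chirp band (8-15.6 Hz),
# impulse-response length, and channel count are illustrative assumptions.
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import chirp

FS = 250                           # sampling rate in Hz (assumed)
T_TRIAL = 4.5                      # calibration length in s (from the abstract)
N_SAMPLES = int(FS * T_TRIAL)
t = np.arange(N_SAMPLES) / FS

# Binary on/off stimulation sequence derived from a linear frequency sweep.
stim = (chirp(t, f0=8.0, f1=15.6, t1=T_TRIAL, method="linear") > 0).astype(float)

L = int(0.3 * FS)                  # impulse response spans ~300 ms (assumed)
# Convolution (Toeplitz) matrix: the reconstructed template is H @ g.
H = toeplitz(stim, np.r_[stim[0], np.zeros(L - 1)])

def fit_common_parameters(Y: np.ndarray, H: np.ndarray, n_iter: int = 20):
    """Alternating least-squares fits of w and g in the model w @ Y ~= H @ g.

    Y : (n_channels, n_samples) single-trial Chirp-VEP recording.
    Returns the spatial filter w (n_channels,) and impulse response g (L,).
    """
    g = np.random.default_rng(0).standard_normal(H.shape[1])
    for _ in range(n_iter):
        template = H @ g                              # reconstructed Chirp-VEP
        # Fix g: solve min_w ||Y.T @ w - template||^2 for the spatial filter.
        w, *_ = np.linalg.lstsq(Y.T, template, rcond=None)
        filtered = w @ Y                              # spatially filtered trial
        # Fix w: solve min_g ||H @ g - filtered||^2 for the impulse response.
        g, *_ = np.linalg.lstsq(H, filtered, rcond=None)
        g /= np.linalg.norm(g) + 1e-12                # remove scale ambiguity
    return w, g

# Random data stands in for a 9-channel occipital EEG recording.
Y = np.random.default_rng(1).standard_normal((9, N_SAMPLES))
w, g = fit_common_parameters(Y, H)
print(w.shape, g.shape)            # (9,) (75,)
```

Once estimated, g can be convolved with each target frequency's periodic stimulation sequence to synthesize SSVEP templates, and w applied to test trials for correlation-based target identification, which is roughly the transfer step the abstract describes.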
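For context on the reported figures, bits/min values such as those quoted above are conventionally computed with the standard Wolpaw ITR formula for an N-target selection task. A small helper for reference; the 40-target, 90%-accuracy example values are hypothetical, not results from the paper.

```python
# Standard Wolpaw ITR in bits/min for an N-class speller; assumes accuracy
# above chance (p > 1/N) and equiprobable targets.
import math

def itr_bits_per_min(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy yields the full log2(N) bits
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# Hypothetical example: 40 targets, 90% accuracy, 1.5 s per selection.
print(f"{itr_bits_per_min(40, 0.90, 1.5):.2f} bits/min")  # ~172.98
```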
Journal Introduction:
The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.