Copilot: A framework for integrating LLM and BMI to enhance human–robot interaction

Authors: Siyu Liu, Mengzhen Liu, Zhiyuan Ming, Yilun Huang, Lingfei Ma, Deyu Zhang, Yifan Song, Jian Zhang, Tianyi Yan
Journal: Robotics and Computer-Integrated Manufacturing, Volume 101, Article 103291 (Q1, Computer Science, Interdisciplinary Applications; impact factor 11.4)
DOI: 10.1016/j.rcim.2026.103291
Published: 2026-10-01 (online 2026-03-09)
URL: https://www.sciencedirect.com/science/article/pii/S0736584526000700
Citations: 0
Abstract
This paper proposes Copilot, a human–robot interaction (HRI) framework that aims to bridge the gap between human intent and robot intelligence. Existing HRI systems struggle to infer human intentions and rely heavily on predefined rules, a limitation that significantly hinders progress in the field. To address this, the Copilot framework integrates, for the first time, the environmental understanding capabilities of large language models (LLMs) with the intention-recognition strengths of brain–machine interfaces (BMIs). It comprises three core modules: (1) an LLM-based visual evoked potential (LLM-VEP) paradigm module that uses an LLM for scene understanding and dynamic marking; (2) a BMI module employing the blink-triggered multivariate variational mode decomposition with canonical correlation analysis (BT-MVMD-CCA) algorithm; and (3) an intelligent agent that flexibly adapts to different task requirements. In online experiments with 12 participants, the system performed best with the EEG-based double-blink triggering (EEG-DBT) method: a 0% false-trigger rate, a 94.09% blink detection rate, and an 84.00% task completion rate. In offline experiments, the proposed BT-MVMD-CCA algorithm achieved 92.3% classification accuracy and a peak information transfer rate (ITR) of 71.1 bits/min at DTW = 1.5 s. This research not only provides theoretical support for the HRI field, but also offers promising solutions for assistive robotics and manufacturing scenarios.
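The abstract quotes a peak ITR of 71.1 bits/min. BCI studies conventionally report Wolpaw's ITR, which depends on the number of selectable targets N, the classification accuracy P, and the selection time T. The abstract does not state the paper's N or T, so the parameters below are purely illustrative; this is a minimal sketch of the standard formula, not the authors' exact computation:

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits/min for an N-class selection task.

    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of selections per minute (60 / T).
    """
    p = accuracy
    if p >= 1.0:
        bits = math.log2(n_classes)          # perfect accuracy: full log2(N) bits per trial
    elif p <= 1.0 / n_classes:
        bits = 0.0                           # at or below chance: no information transferred
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * (60.0 / trial_seconds)

# Illustrative only: a hypothetical 4-target paradigm at 92.3% accuracy
# with a 1.5 s selection window.
print(wolpaw_itr(4, 0.923, 1.5))
```

Because the formula is sensitive to N and T, the same accuracy can yield very different ITRs across paradigms, which is why peak ITR is usually reported together with the data-window length (here DTW = 1.5 s).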
Journal overview:
The journal, Robotics and Computer-Integrated Manufacturing, focuses on sharing research applications that contribute to the development of new or enhanced robotics, manufacturing technologies, and innovative manufacturing strategies that are relevant to industry. Papers that combine theory and experimental validation are preferred, while review papers on current robotics and manufacturing issues are also considered. However, papers on traditional machining processes, modeling and simulation, supply chain management, and resource optimization are generally not within the scope of the journal, as there are more appropriate journals for these topics. Similarly, papers that are overly theoretical or mathematical will be directed to other suitable journals. The journal welcomes original papers in areas such as industrial robotics, human-robot collaboration in manufacturing, cloud-based manufacturing, cyber-physical production systems, big data analytics in manufacturing, smart mechatronics, machine learning, adaptive and sustainable manufacturing, and other fields involving unique manufacturing technologies.