{"title":"人机内在技能传递与演示系统编程","authors":"Fei Wang;Jinxiu Wu;Siyi Lian;Yi Guo;Kaiyin Hu","doi":"10.1109/TASE.2025.3559661","DOIUrl":null,"url":null,"abstract":"Robots can often learn skills from human demonstrations. Robot operations via visual perception are commonly influenced by the external environment and are relatively demanding in terms of external conditions, manual segmentation of tasks is time-consuming and labor-intensive, and robots do not perform complex tasks with sufficient accuracy and naturalness in their movements. In this work, we propose a programming by demonstration framework to facilitate autonomous task segmentation and flexible execution. We acquire surface electromyography (sEMG) signals from the forearm and train the gesture datasets by transfer learning from the sign language dataset to achieve action classification. Meanwhile, the inertial information of the forearm is collected and combined with the sEMG signals to autonomously segment operational skills into discrete task units, and after comparing with ground truth, it is demonstrated that multi-modal information can lead to higher segmentation accuracy. To make the robot movements more natural, we add arm stiffness information to this system and estimate the arm stiffness of different individuals by creating a muscle force map of the demonstrator. Finally, human manipulation skills are mapped onto the UR5e robot to validate the results of human-robot skill transfer. Note to Practitioners—The motivation of this work is to record the characteristics of human operations and transfer them to robots for more flexible and safer robot programming, with eventual application in structured industrial scenarios. Differences between our study and others include 1) The use of a portable multi-source signal acquisition device that operates in an uncoupled fashion. 2) Consider the demonstration programming process from the external demonstration information and the inner force information. 3) The results obtained by different people are the same, indicating that the method has good generalization.","PeriodicalId":51060,"journal":{"name":"IEEE Transactions on Automation Science and Engineering","volume":"22 ","pages":"14498-14509"},"PeriodicalIF":6.4000,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human–Robot Intrinsic Skill Transfer and Programming by Demonstration System\",\"authors\":\"Fei Wang;Jinxiu Wu;Siyi Lian;Yi Guo;Kaiyin Hu\",\"doi\":\"10.1109/TASE.2025.3559661\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Robots can often learn skills from human demonstrations. Robot operations via visual perception are commonly influenced by the external environment and are relatively demanding in terms of external conditions, manual segmentation of tasks is time-consuming and labor-intensive, and robots do not perform complex tasks with sufficient accuracy and naturalness in their movements. In this work, we propose a programming by demonstration framework to facilitate autonomous task segmentation and flexible execution. We acquire surface electromyography (sEMG) signals from the forearm and train the gesture datasets by transfer learning from the sign language dataset to achieve action classification. 
Meanwhile, the inertial information of the forearm is collected and combined with the sEMG signals to autonomously segment operational skills into discrete task units, and after comparing with ground truth, it is demonstrated that multi-modal information can lead to higher segmentation accuracy. To make the robot movements more natural, we add arm stiffness information to this system and estimate the arm stiffness of different individuals by creating a muscle force map of the demonstrator. Finally, human manipulation skills are mapped onto the UR5e robot to validate the results of human-robot skill transfer. Note to Practitioners—The motivation of this work is to record the characteristics of human operations and transfer them to robots for more flexible and safer robot programming, with eventual application in structured industrial scenarios. Differences between our study and others include 1) The use of a portable multi-source signal acquisition device that operates in an uncoupled fashion. 2) Consider the demonstration programming process from the external demonstration information and the inner force information. 3) The results obtained by different people are the same, indicating that the method has good generalization.\",\"PeriodicalId\":51060,\"journal\":{\"name\":\"IEEE Transactions on Automation Science and Engineering\",\"volume\":\"22 \",\"pages\":\"14498-14509\"},\"PeriodicalIF\":6.4000,\"publicationDate\":\"2025-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Automation Science and Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10962176/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Automation Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10962176/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Human–Robot Intrinsic Skill Transfer and Programming by Demonstration System
Robots can often learn skills from human demonstrations. However, robot operation based on visual perception is easily affected by the external environment and places relatively high demands on external conditions; manual segmentation of tasks is time-consuming and labor-intensive; and robot movements often lack sufficient accuracy and naturalness when performing complex tasks. In this work, we propose a programming-by-demonstration framework that supports autonomous task segmentation and flexible execution. We acquire surface electromyography (sEMG) signals from the forearm and train a gesture classifier by transfer learning from a sign language dataset to achieve action classification. Meanwhile, inertial information from the forearm is collected and combined with the sEMG signals to autonomously segment operational skills into discrete task units; comparison against ground truth shows that the multimodal information yields higher segmentation accuracy. To make the robot's movements more natural, we add arm stiffness information to the system and estimate the arm stiffness of different individuals from a muscle force map of the demonstrator. Finally, human manipulation skills are mapped onto a UR5e robot to validate the results of human-robot skill transfer.

Note to Practitioners: The motivation of this work is to record the characteristics of human operations and transfer them to robots, enabling more flexible and safer robot programming with eventual application in structured industrial scenarios. Our study differs from prior work in three respects: 1) it uses a portable, multi-source signal acquisition device that operates in an uncoupled fashion; 2) it considers the demonstration-programming process in terms of both external demonstration information and internal force information; and 3) different demonstrators obtain consistent results, indicating that the method generalizes well.
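As a rough illustration of the transfer-learning step (reusing features learned on a sign language dataset and fine-tuning only a new output head on the operator's gesture classes), a minimal PyTorch sketch follows. The stand-in backbone, the 8-channel/400-sample sEMG window, and the six gesture classes are all illustrative assumptions; the abstract does not specify the actual network.

```python
import torch
import torch.nn as nn

NUM_GESTURES = 6  # assumed number of gesture classes; not stated in the abstract

def build_finetune_model(backbone: nn.Module, feat_dim: int) -> nn.Module:
    """Attach a fresh classification head to a backbone pretrained on sign language."""
    for p in backbone.parameters():
        p.requires_grad = False  # freeze the source-task features
    return nn.Sequential(backbone, nn.Linear(feat_dim, NUM_GESTURES))

# Stand-in 1-D CNN backbone for windowed sEMG (8 channels x 400 samples).
backbone = nn.Sequential(
    nn.Conv1d(8, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
model = build_finetune_model(backbone, feat_dim=32)
logits = model(torch.randn(2, 8, 400))  # -> shape (2, NUM_GESTURES)
```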
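The multimodal segmentation step (fusing an sEMG activity envelope with forearm inertial motion energy to cut a demonstration into discrete task units) might be sketched as below. The window lengths, fusion weight, rest threshold, and minimum segment length are assumptions for illustration, not values from the paper.

```python
import numpy as np

def activity_envelope(x: np.ndarray, win: int = 200) -> np.ndarray:
    """Moving RMS of a 1-D signal (rectified sEMG or gyro magnitude)."""
    sq = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(sq)

def segment_demo(semg: np.ndarray, gyro: np.ndarray,
                 w_emg: float = 0.5, rest_thresh: float = 0.2,
                 min_len: int = 300) -> list[tuple[int, int]]:
    """Return (start, end) sample indices of detected task units."""
    e = activity_envelope(np.abs(semg))
    g = activity_envelope(np.linalg.norm(gyro, axis=1) if gyro.ndim > 1 else np.abs(gyro))
    # Normalize each modality, then fuse; the weighting is an assumption.
    fused = w_emg * e / (e.max() + 1e-8) + (1 - w_emg) * g / (g.max() + 1e-8)
    active = fused > rest_thresh
    # Keep contiguous "active" runs longer than min_len samples as task units.
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments
```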
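Finally, the stiffness-transfer idea (estimating arm stiffness from muscle activity and using it to modulate robot compliance) could reduce, in its simplest hypothetical form, to mapping a normalized co-contraction level onto a Cartesian stiffness gain for the robot controller. The linear map and the stiffness bounds below are assumptions; the paper's muscle force map is not detailed in the abstract.

```python
import numpy as np

K_MIN, K_MAX = 100.0, 1200.0  # assumed Cartesian stiffness bounds [N/m]

def stiffness_from_activation(activations: np.ndarray) -> float:
    """Map normalized [0, 1] sEMG levels of monitored muscles to a stiffness gain."""
    co_contraction = float(np.clip(activations.mean(), 0.0, 1.0))
    return K_MIN + co_contraction * (K_MAX - K_MIN)

# Example: moderate co-contraction yields an intermediate stiffness setting.
print(stiffness_from_activation(np.array([0.3, 0.5, 0.4])))
```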
Journal introduction:
The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.