Human–Robot Intrinsic Skill Transfer and Programming by Demonstration System

IF 6.4 · CAS Tier 2 (Computer Science) · JCR Q1 (AUTOMATION & CONTROL SYSTEMS)
Fei Wang;Jinxiu Wu;Siyi Lian;Yi Guo;Kaiyin Hu
DOI: 10.1109/TASE.2025.3559661
Journal: IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 14498-14509
Published: 2025-04-10 (Journal Article)
Citations: 0

Abstract

Robots can often learn skills from human demonstrations. However, robot operation via visual perception is easily affected by the external environment and places relatively high demands on external conditions; manual task segmentation is time-consuming and labor-intensive; and robots do not perform complex tasks with sufficient accuracy or naturalness of movement. In this work, we propose a programming-by-demonstration framework that enables autonomous task segmentation and flexible execution. We acquire surface electromyography (sEMG) signals from the forearm and train gesture classifiers by transfer learning from a sign language dataset to achieve action classification. In parallel, inertial information from the forearm is collected and combined with the sEMG signals to autonomously segment operational skills into discrete task units; comparison against ground truth shows that the multi-modal information yields higher segmentation accuracy. To make the robot's movements more natural, we add arm stiffness information to the system, estimating each demonstrator's arm stiffness from a muscle force map. Finally, human manipulation skills are mapped onto a UR5e robot to validate the results of human-robot skill transfer.

Note to Practitioners — The motivation of this work is to record the characteristics of human operations and transfer them to robots for more flexible and safer robot programming, with eventual application in structured industrial scenarios. Our study differs from others in three respects: 1) it uses a portable multi-source signal acquisition device that operates in an uncoupled fashion; 2) it considers the demonstration programming process from both the external demonstration information and the internal force information; and 3) different demonstrators obtain consistent results, indicating that the method generalizes well.
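The core segmentation idea — fusing forearm sEMG with inertial measurements so that a continuous demonstration is carved into discrete task units more reliably than either modality alone — can be illustrated with a minimal threshold-based sketch. This is only an illustration of the multi-modal intuition, not the paper's actual algorithm; the function names, window sizes, and thresholds here are assumptions.

```python
import numpy as np

def moving_rms(x, win):
    """Moving RMS envelope of a 1-D signal (win in samples)."""
    return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

def segment_activity(semg, accel, fs=200, win_s=0.25, k=3.0):
    """Segment a continuous demonstration into discrete task units.

    A sample counts as 'active' only when BOTH the sEMG envelope and
    the acceleration-magnitude envelope exceed k times their resting
    baseline; requiring agreement between the two modalities rejects
    motion-only or muscle-tension-only artifacts, which is the
    intuition behind the multi-modal segmentation claim.
    Returns a list of (start_idx, end_idx) sample-index pairs.
    """
    n = int(win_s * fs)
    e_emg = moving_rms(semg, n)
    e_acc = moving_rms(accel, n)
    # Resting baseline: mean of the quietest 10% of envelope samples.
    thr_emg = k * np.sort(e_emg)[: len(e_emg) // 10].mean()
    thr_acc = k * np.sort(e_acc)[: len(e_acc) // 10].mean()
    active = (e_emg > thr_emg) & (e_acc > thr_acc)
    # Extract contiguous active runs, dropping blips shorter than one window.
    padded = np.concatenate(([False], active, [False]))
    d = np.diff(padded.astype(int))
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= n]
```

Each returned (start, end) pair would be one candidate task unit; in the paper's pipeline, each unit is then labeled by the gesture classifier trained via transfer learning from the sign language dataset.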
Source journal: IEEE Transactions on Automation Science and Engineering (Engineering & Technology: Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Annual articles: 404
Review time: 3.0 months
About the journal: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.