{"title":"基于协作机器人演示的端到端动作模型学习","authors":"Andrea Maria Zanchettin","doi":"10.1016/j.robot.2025.105071","DOIUrl":null,"url":null,"abstract":"<div><div>Access to advanced technology is crucial across all engineering disciplines. In the realm of industrial automation, collaborative robotics serves as a key solution, particularly for small or medium-sized enterprises facing frequent shifts in production demands. This paper introduces a Symbolic Programming by Demonstration approach to efficiently configure and operate a collaborative robotics workstation. While motion profiles (i.e., the <em>how</em>) are taught through the commonly used lead-through programming method, the conditions to check before the execution of a motion and its impact on the environment (the <em>when</em> and <em>what</em>, respectively) are automatically derived using visual feedback. Differently from related works, the present methodology does not require a pre-compiled domain knowledge to encode the semantic characterisation of a demonstrated action (i.e., preconditions and effects). An industrially-relevant use-case, consisting in a collaborative robotics assembly application, is introduced to validate the approach. Results show high success rates in interpreting and solving user-defined tasks (i.e., goals) as well as the capability of the method to generalise well in situations never seen during the acquired demonstrations.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"193 ","pages":"Article 105071"},"PeriodicalIF":4.3000,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"End-to-end action model learning from demonstration in collaborative robotics\",\"authors\":\"Andrea Maria Zanchettin\",\"doi\":\"10.1016/j.robot.2025.105071\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Access to advanced technology is crucial across all engineering disciplines. In the realm of industrial automation, collaborative robotics serves as a key solution, particularly for small or medium-sized enterprises facing frequent shifts in production demands. This paper introduces a Symbolic Programming by Demonstration approach to efficiently configure and operate a collaborative robotics workstation. While motion profiles (i.e., the <em>how</em>) are taught through the commonly used lead-through programming method, the conditions to check before the execution of a motion and its impact on the environment (the <em>when</em> and <em>what</em>, respectively) are automatically derived using visual feedback. Differently from related works, the present methodology does not require a pre-compiled domain knowledge to encode the semantic characterisation of a demonstrated action (i.e., preconditions and effects). An industrially-relevant use-case, consisting in a collaborative robotics assembly application, is introduced to validate the approach. 
Results show high success rates in interpreting and solving user-defined tasks (i.e., goals) as well as the capability of the method to generalise well in situations never seen during the acquired demonstrations.</div></div>\",\"PeriodicalId\":49592,\"journal\":{\"name\":\"Robotics and Autonomous Systems\",\"volume\":\"193 \",\"pages\":\"Article 105071\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Robotics and Autonomous Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0921889025001575\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0921889025001575","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
End-to-end action model learning from demonstration in collaborative robotics
Access to advanced technology is crucial across all engineering disciplines. In the realm of industrial automation, collaborative robotics serves as a key solution, particularly for small and medium-sized enterprises facing frequent shifts in production demands. This paper introduces a Symbolic Programming by Demonstration approach to efficiently configure and operate a collaborative robotics workstation. While motion profiles (i.e., the how) are taught through the commonly used lead-through programming method, the conditions to check before executing a motion and its impact on the environment (the when and what, respectively) are derived automatically from visual feedback. Unlike related works, the present methodology does not require pre-compiled domain knowledge to encode the semantic characterisation of a demonstrated action (i.e., its preconditions and effects). An industrially relevant use case, consisting of a collaborative robotics assembly application, is introduced to validate the approach. Results show high success rates in interpreting and solving user-defined tasks (i.e., goals), as well as the method's ability to generalise to situations never seen during the acquired demonstrations.
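To make the idea of learning an action's semantic characterisation more concrete, the following is a minimal, hypothetical Python sketch, not the paper's actual implementation: it induces preconditions and effects by comparing the symbolic world state observed (e.g., via vision) immediately before and after each demonstrated execution of an action. The function name, predicate strings, and data format are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed names and data format, not the paper's code):
# derive an action's preconditions and effects from (pre_state, post_state) pairs,
# where each state is a set of ground predicates observed around a demonstration.

def learn_action_model(demonstrations):
    """demonstrations: list of (pre_state, post_state) pairs, each a set of
    ground predicates such as 'at(part_A, feeder)' or 'gripper_empty'."""
    pre_states = [pre for pre, _ in demonstrations]

    # Preconditions: predicates that held in every observed pre-state.
    preconditions = set.intersection(*pre_states) if pre_states else set()

    # Positive effects: predicates absent before and present after, in all demonstrations.
    add_effects = set.intersection(*[post - pre for pre, post in demonstrations])

    # Negative effects: predicates present before and absent after, in all demonstrations.
    del_effects = set.intersection(*[pre - post for pre, post in demonstrations])

    return {"pre": preconditions, "add": add_effects, "del": del_effects}


if __name__ == "__main__":
    # Two demonstrations of a hypothetical pick action observed with different clutter.
    demos = [
        ({"gripper_empty", "at(part_A, feeder)"},
         {"holding(part_A)"}),
        ({"gripper_empty", "at(part_A, feeder)", "at(part_B, tray)"},
         {"holding(part_A)", "at(part_B, tray)"}),
    ]
    print(learn_action_model(demos))
```

Under these assumptions, predicates unrelated to the action (such as at(part_B, tray)) drop out of the learned model because they are not shared across all demonstrations, which is one simple way such an approach can generalise to scenes never seen during the acquired demonstrations.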
Journal introduction:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.