{"title":"鲁棒和自适应灵巧操作与基于视觉的学习从多个演示","authors":"Nuo Chen;Lucas Wan;Ya-Jun Pan","doi":"10.1109/TIE.2024.3503610","DOIUrl":null,"url":null,"abstract":"In this article, we propose a vision-based learning-from-demonstration framework for a seven-degree-of-freedom (7-DOF) robotic manipulator. This framework enables learning from multiple contact-free human-hand demonstrations to execute dexterous pick-and-place tasks. Conventional methods for collecting demonstration data involve manually and physically moving the robot. These methods can be cumbersome, lack dexterity, and be physically straining. We leverage MediaPipe software, dynamic time warping (DTW), and Gaussian mixture model/regression to capture and regress multiple dexterous and marker-less hand motions. The proposed approach results in a more comprehensive motion representation, simplifying multiple demonstrations, and mitigating the non-smoothness inherent in single demonstrations. A novel dynamic movement primitives (DMP) with a variance-based force coupling term are developed to adaptively assimilate human actions into trajectories executable in dynamic environments. By considering the estimated variance from demonstration data, the DMP parameters are automatically fine-tuned and associated with the nonlinear terms to adapt the trajectories. To compensate for unknown external disturbances, non-singular terminal sliding mode (NTSM) control is applied for precise trajectory tracking. Experimental studies demonstrate the performance and robustness of our framework in executing demonstrations, motion planning, and control for a pick-and-place task.","PeriodicalId":13402,"journal":{"name":"IEEE Transactions on Industrial Electronics","volume":"72 6","pages":"6465-6473"},"PeriodicalIF":7.2000,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust and Adaptive Dexterous Manipulation With Vision-Based Learning From Multiple Demonstrations\",\"authors\":\"Nuo Chen;Lucas Wan;Ya-Jun Pan\",\"doi\":\"10.1109/TIE.2024.3503610\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we propose a vision-based learning-from-demonstration framework for a seven-degree-of-freedom (7-DOF) robotic manipulator. This framework enables learning from multiple contact-free human-hand demonstrations to execute dexterous pick-and-place tasks. Conventional methods for collecting demonstration data involve manually and physically moving the robot. These methods can be cumbersome, lack dexterity, and be physically straining. We leverage MediaPipe software, dynamic time warping (DTW), and Gaussian mixture model/regression to capture and regress multiple dexterous and marker-less hand motions. The proposed approach results in a more comprehensive motion representation, simplifying multiple demonstrations, and mitigating the non-smoothness inherent in single demonstrations. A novel dynamic movement primitives (DMP) with a variance-based force coupling term are developed to adaptively assimilate human actions into trajectories executable in dynamic environments. By considering the estimated variance from demonstration data, the DMP parameters are automatically fine-tuned and associated with the nonlinear terms to adapt the trajectories. To compensate for unknown external disturbances, non-singular terminal sliding mode (NTSM) control is applied for precise trajectory tracking. 
Experimental studies demonstrate the performance and robustness of our framework in executing demonstrations, motion planning, and control for a pick-and-place task.\",\"PeriodicalId\":13402,\"journal\":{\"name\":\"IEEE Transactions on Industrial Electronics\",\"volume\":\"72 6\",\"pages\":\"6465-6473\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2024-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Industrial Electronics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10774182/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Industrial Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10774182/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Robust and Adaptive Dexterous Manipulation With Vision-Based Learning From Multiple Demonstrations
In this article, we propose a vision-based learning-from-demonstration framework for a seven-degree-of-freedom (7-DOF) robotic manipulator. This framework enables learning from multiple contact-free human-hand demonstrations to execute dexterous pick-and-place tasks. Conventional methods for collecting demonstration data involve manually and physically moving the robot. These methods can be cumbersome, lack dexterity, and be physically straining. We leverage MediaPipe software, dynamic time warping (DTW), and Gaussian mixture model/regression to capture and regress multiple dexterous and marker-less hand motions. The proposed approach results in a more comprehensive motion representation, simplifying multiple demonstrations, and mitigating the non-smoothness inherent in single demonstrations. A novel dynamic movement primitives (DMP) with a variance-based force coupling term are developed to adaptively assimilate human actions into trajectories executable in dynamic environments. By considering the estimated variance from demonstration data, the DMP parameters are automatically fine-tuned and associated with the nonlinear terms to adapt the trajectories. To compensate for unknown external disturbances, non-singular terminal sliding mode (NTSM) control is applied for precise trajectory tracking. Experimental studies demonstrate the performance and robustness of our framework in executing demonstrations, motion planning, and control for a pick-and-place task.
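The abstract names DTW as the tool for temporally aligning the captured hand trajectories before GMM fitting, but does not detail the implementation. Below is a minimal NumPy sketch of the classic dynamic-programming formulation of DTW; the function and variable names (`dtw_align`, `x`, `y`) are illustrative, not from the paper.

```python
import numpy as np

def dtw_align(x, y):
    """Classic O(n*m) dynamic time warping between two trajectories
    x (n, d) and y (m, d). Returns the optimal warping path as index
    pairs plus the accumulated alignment cost."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    # Backtrack from (n, m) to recover the optimal path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], cost[n, m]
```

The returned path can be used to resample each demonstration onto the time base of a reference demonstration, so that a GMM is fit across time-aligned points and GMR yields both a mean trajectory and a per-step variance.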
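The central contribution is a DMP with a variance-based force coupling term, but the abstract does not give the exact coupling law. The sketch below assumes a simple inverse-variance gate, w = 1 / (1 + k * var), that attenuates the learned forcing term wherever the demonstrations disagree; this captures the stated idea of tying the DMP's nonlinear term to the estimated variance, but the gating rule, parameter names, and gains are all assumptions for illustration.

```python
import numpy as np

def rollout_dmp(y0, g, f_target, var, dt=0.01, tau=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_s=4.0, k=10.0):
    """Roll out a discrete DMP whose forcing term is down-weighted where
    the demonstrations disagree. f_target[i] is the learned forcing value
    (e.g., from the GMR mean) and var[i] the estimated variance at phase
    step i. The rule w = 1/(1 + k*var) is an illustrative assumption,
    not the paper's exact coupling term."""
    n = len(f_target)
    y, z, s = y0, 0.0, 1.0
    traj = np.empty(n)
    for i in range(n):
        w = 1.0 / (1.0 + k * var[i])          # variance-based weight
        f = w * f_target[i] * s               # gated, phase-scaled forcing
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z                     # transformation system
        s += dt / tau * (-alpha_s * s)        # canonical system decay
        traj[i] = y
    return traj
```

Under this reading, where the variance is near zero the primitive reproduces the demonstrated shape, and where it is large the forcing fades and the system falls back on the stable goal attractor, which is one plausible interpretation of "variance-based force coupling."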
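For the tracking layer, the abstract names NTSM control. A full 7-DOF implementation requires the manipulator dynamics, so the following sketch applies the standard NTSM law to a one-dimensional double integrator with a bounded disturbance, which is enough to illustrate the non-singular sliding surface and the switching term. All gains, the disturbance, and the reference trajectory are illustrative.

```python
import numpy as np

def sig(x, a):
    """Sign-preserving power: sign(x) * |x|**a."""
    return np.sign(x) * np.abs(x) ** a

def simulate_ntsm(x1d, dx1d, ddx1d, d_fun, dt=0.001, T=3.0,
                  beta=5.0, p=5, q=3, D=1.0, eta=0.5):
    """Simulate NTSM tracking for the double integrator ddx = u + d,
    |d| <= D. Surface: s = e + (1/beta) * sig(de, p/q) with p, q odd
    and 1 < p/q < 2, so the surface is non-singular at de = 0.
    Control: u = ddx_d - beta*(q/p)*sig(de, 2 - p/q) - (D + eta)*sign(s)."""
    n = int(T / dt)
    x1, x2 = 0.0, 0.0
    a = p / q
    log = np.empty((n, 2))
    for i in range(n):
        t = i * dt
        e, de = x1 - x1d(t), x2 - dx1d(t)
        s = e + sig(de, a) / beta
        u = ddx1d(t) - beta * (q / p) * sig(de, 2 - a) - (D + eta) * np.sign(s)
        x2 += dt * (u + d_fun(t))             # plant with disturbance
        x1 += dt * x2
        log[i] = (x1, e)
    return log

# Track a sine reference under a bounded disturbance:
log = simulate_ntsm(x1d=np.sin, dx1d=np.cos, ddx1d=lambda t: -np.sin(t),
                    d_fun=lambda t: 0.8 * np.sin(5 * t))
```

The discontinuous sign(s) term causes chattering in simulation; in practice it is commonly smoothed with a boundary layer or saturation function.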
Journal Introduction:
Journal Name: IEEE Transactions on Industrial Electronics
Publication Frequency: Monthly
Scope:
The scope of IEEE Transactions on Industrial Electronics encompasses the following areas:
Applications of electronics, controls, and communications in industrial and manufacturing systems and processes.
Power electronics and drive control techniques.
System control and signal processing.
Fault detection and diagnosis.
Power systems.
Instrumentation, measurement, and testing.
Modeling and simulation.
Motion control.
Robotics.
Sensors and actuators.
Implementation of neural networks, fuzzy logic, and artificial intelligence in industrial systems.
Factory automation.
Communication and computer networks.