João Silvério, S. Calinon, L. Rozo, D. Caldwell
2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), November 2018
DOI: 10.1109/HUMANOIDS.2018.8624993
Bimanual Skill Learning with Pose and Joint Space Constraints
As humanoid robots become commonplace, learning and control algorithms must take into account the new challenges imposed by this morphology in order to fully exploit its potential. One of the most prominent characteristics of such robots is their bimanual structure. Most research on learning bimanual skills has focused on the coordination between end-effectors, exploiting operational space formulations. However, motion patterns in bimanual scenarios are not exclusive to the operational space; they also occur at the joint level. Moreover, in addition to position, end-effector orientation is also essential for bimanual operation. Here, we propose a framework that simultaneously learns constraints in configuration and operational spaces while considering end-effector orientations, which are commonly overlooked in previous work. In particular, we extend the Task-Parameterized Gaussian Mixture Model (TP-GMM) with novel Jacobian-based operators that address the foregoing problems. The proposed framework is evaluated in a bimanual task with the COMAN humanoid that requires the consideration of both operational and configuration space motions.
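For context, the standard TP-GMM reproduction step that the paper builds on can be sketched as follows. This is not the authors' Jacobian-based extension, only the well-known base mechanism: each Gaussian component, learned in a local task frame, is mapped into the global frame by that frame's transform (A, b), and the frame-wise Gaussians are then fused by a product of Gaussians. All variable names below are illustrative.

```python
import numpy as np

def transform_gaussian(mu, sigma, A, b):
    """Map a Gaussian (mu, sigma) from a local task frame into the
    global frame using the frame's linear transform A and offset b."""
    return A @ mu + b, A @ sigma @ A.T

def product_of_gaussians(mus, sigmas):
    """Fuse frame-wise Gaussians: the core TP-GMM reproduction step.
    Precision-weighted combination of the transformed components."""
    precisions = [np.linalg.inv(s) for s in sigmas]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(p @ m for p, m in zip(precisions, mus))
    return mu, sigma

# Toy example with two hypothetical frames holding identical covariances:
# the fused mean lands midway between the frame-wise means, and the
# fused covariance shrinks (two agreeing sources of information).
mu1, s1 = transform_gaussian(np.zeros(2), np.eye(2), np.eye(2), np.array([1.0, 0.0]))
mu2, s2 = transform_gaussian(np.zeros(2), np.eye(2), np.eye(2), np.array([-1.0, 0.0]))
mu, sigma = product_of_gaussians([mu1, mu2], [s1, s2])
print(mu)     # → [0. 0.]
print(sigma)  # → 0.5 * identity
```

The paper's contribution replaces the purely linear frame transforms with Jacobian-based operators, so that constraints expressed in configuration (joint) space and in operational space, including end-effector orientation, can be fused in the same way.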