Learning Under-Specified Object Manipulations from Human Demonstrations
K. Qian, Jun Xu, Ge Gao, Fang Fang, Xudong Ma
2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 2018
DOI: 10.1109/ICARCV.2018.8581080
Learning by Demonstration (LbD) allows robots to acquire manipulation skills from human demonstrations. Perceiving the spatial-temporal relations between sub-activities and object affordances in such demonstrations is a challenging task, especially when the demonstrations are under-specified. This work extends Probabilistic Graphical Model based methods to incorporate high-level demonstration classification. We propose an approach that models the semantics of human demonstrations using the Planning Domain Definition Language (PDDL). As a result, hidden motion primitives that cannot be learned directly by observing human demonstrations in noisy video data can be inferred, and the robot's plans are refined. Experimental results validate the effectiveness of the proposed method: more refined scripts can be generated for the robot's execution.
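As a rough illustration of the kind of semantic model the abstract describes, a manipulation domain can be written in PDDL so that hidden motion primitives (e.g., a grasp occluded in the video) appear as explicit actions with preconditions and effects. The domain, predicate, and action names below are hypothetical sketches for illustration only; they are not taken from the paper.

```pddl
;; Hypothetical sketch: names are illustrative, not from the paper.
;; Encoding a grasp as an explicit PDDL action lets a planner infer it
;; as a hidden step even when it is not observable in noisy video.
(define (domain manipulation-demo)
  (:requirements :strips :typing)
  (:types object location)
  (:predicates
    (at ?o - object ?l - location)
    (holding ?o - object)
    (hand-empty))

  ;; "pick" models a grasp primitive that may be occluded in video:
  (:action pick
    :parameters (?o - object ?l - location)
    :precondition (and (at ?o ?l) (hand-empty))
    :effect (and (holding ?o) (not (at ?o ?l)) (not (hand-empty))))

  (:action place
    :parameters (?o - object ?l - location)
    :precondition (holding ?o)
    :effect (and (at ?o ?l) (not (holding ?o)) (hand-empty))))
```

Given observed pre- and post-conditions from a demonstration (say, the object moved between two locations), a planner over such a domain can fill in the unobserved pick/place sequence, which matches the abstract's claim that hidden primitives are inferred and the robot's plan refined.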