Title: Learning by observation of robotic tasks using on-line PCA-based Eigen behavior
Authors: Xianhua Jiang, Y. Motai
DOI: 10.1109/CIRA.2005.1554308
Venue: 2005 International Symposium on Computational Intelligence in Robotics and Automation
Published: 2005-06-27
Citations: 12
Abstract
This paper presents a new framework for learning the behavior of an articulated body. Motion capture methods have been developed mainly for the analysis of human movement but are rarely used to teach a robot human behavior in an on-line manner. In the traditional teaching method, robotic motion is captured, converted into a virtual world, and then analyzed through human interaction with a graphical user interface. However, such a supervised learning framework is often unrealistic, since many real-life applications involve huge datasets for which exhaustive sample labeling requires expensive human resources. Thus, in our learning phase, we first apply supervised learning to a small set of instances using traditional principal component analysis (PCA) in an off-line phase, and then apply an incremental PCA technique in the on-line phase. Our on-line PCA method maintains reconstruction accuracy and can absorb numerous new training instances while keeping the eigenspace at a reasonable dimension. In contrast to other incremental on-line learning approaches, which operate on individual static images, our method treats an entire image sequence as a single unit of sensory data. Extensions of this methodology include robotic imitation of human behavior at the semantic level. Experimental results with a humanoid robot demonstrate the feasibility and merits of this new approach to robotic teaching.
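The two-phase scheme the abstract describes — batch PCA on a small off-line set, followed by on-line eigenspace updating as new sequences arrive — can be illustrated with a minimal sketch. The paper's exact update rule is not given in the abstract, so the residual-based update, dimensions, and threshold below are assumptions for illustration only; note that a whole flattened motion sequence, not a single frame, is the unit of data, matching the paper's key idea.

```python
import numpy as np

# Hypothetical illustration of the off-line/on-line PCA phases; the exact
# update rule, dimensions, and tolerance are assumptions, not the authors'
# published algorithm.

rng = np.random.default_rng(0)

# Off-line phase: batch PCA on a small initial set of motion sequences.
# Each sequence (T frames x D joint angles) is flattened into one vector,
# so an entire sequence -- not a single image -- is one training instance.
T, D, n0 = 10, 6, 20
X0 = rng.normal(size=(n0, T * D))
mean = X0.mean(axis=0)
_, _, Vt = np.linalg.svd(X0 - mean, full_matrices=False)
basis = Vt[:5]                      # low-dimensional eigenspace (orthonormal rows)

def update_eigenspace(basis, mean, x, tol=1.0, k_max=8):
    """On-line phase: if a new sequence x is poorly reconstructed, append its
    normalized residual as a new eigenvector (Gram-Schmidt style), keeping
    the eigenspace dimension bounded by k_max."""
    r = (x - mean) - basis.T @ (basis @ (x - mean))
    if np.linalg.norm(r) > tol and basis.shape[0] < k_max:
        basis = np.vstack([basis, r / np.linalg.norm(r)])
    return basis

def recon_error(basis, mean, x):
    """Norm of the component of x the current eigenspace cannot represent."""
    c = x - mean
    return np.linalg.norm(c - basis.T @ (basis @ c))

x_new = rng.normal(size=T * D)      # a newly observed motion sequence
err_before = recon_error(basis, mean, x_new)
basis = update_eigenspace(basis, mean, x_new)
err_after = recon_error(basis, mean, x_new)
```

Bounding the dimension with `k_max` mirrors the abstract's claim of adding many new training instances "while keeping reasonable dimensions of the eigenspace"; a full implementation would also update the mean and eigenvalues incrementally.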