{"title":"Autonomous motion primitive segmentation of actions for incremental imitative learning of humanoid","authors":"Farhan Dawood, C. Loo","doi":"10.1109/RIISS.2014.7009169","DOIUrl":null,"url":null,"abstract":"During imitation learning or learning by demon-stration/observation, a crucial element of conception involves segmenting the continuous flow of motion into simpler units ÂĂŗ- motion primitives -ÂĂŗ by identifying the boundaries of an action. Secondly, in realistic environment the robot must be able to learn the observed motion patterns incrementally in a stable adaptive manner. In this paper, we propose an on-line and unsupervised motion segmentation method rendering the robot to learn actions by observing the patterns performed by other partner through Incremental Slow Feature Analysis. The segmentation model directly operates on the images acquired from the robot's vision sensor (camera) without requiring any kinematic model of the demonstrator. After segmentation, the spatio-temporal motion sequences are learned incrementally through Topological Gaussian Adaptive Resonance Hidden Markov Model. The learning model dynamically generates the topological structure in a self-organizing and self-stabilizing manner.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RIISS.2014.7009169","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
During imitation learning, or learning by demonstration/observation, a crucial step is segmenting the continuous flow of motion into simpler units, known as motion primitives, by identifying the boundaries of an action. In addition, in a realistic environment the robot must be able to learn the observed motion patterns incrementally, in a stable and adaptive manner. In this paper, we propose an online, unsupervised motion segmentation method that enables the robot to learn actions by observing the patterns performed by a partner, using Incremental Slow Feature Analysis. The segmentation model operates directly on the images acquired from the robot's vision sensor (camera) without requiring any kinematic model of the demonstrator. After segmentation, the spatio-temporal motion sequences are learned incrementally through a Topological Gaussian Adaptive Resonance Hidden Markov Model. The learning model dynamically generates the topological structure in a self-organizing and self-stabilizing manner.
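To illustrate the slowness principle behind the segmentation stage, the following is a minimal sketch, not the authors' implementation: it runs a simplified batch linear Slow Feature Analysis over flattened frame vectors and flags candidate primitive boundaries where the slowest feature changes fastest. The paper uses an *incremental* SFA operating on camera images; the batch formulation, the thresholding rule, and all function names here are illustrative assumptions.

```python
# Illustrative sketch only (assumption, not the paper's method): batch linear SFA
# on a (T, D) matrix of flattened frames, then a simple boundary detector on the
# slowest extracted feature.
import numpy as np

def linear_sfa(X, n_components=3):
    """X: (T, D) array of flattened frames. Returns (T, n_components) slow features."""
    Xc = X - X.mean(axis=0)                       # center the signals
    # Whiten via PCA of the data covariance
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-8                          # drop near-degenerate directions
    W_white = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = Xc @ W_white                              # whitened signals
    # Slow features: directions minimizing temporal variation of the whitened signal
    dZ = np.diff(Z, axis=0)
    dval, dvec = np.linalg.eigh(np.cov(dZ, rowvar=False))   # ascending = slowest first
    return Z @ dvec[:, :n_components]

def segment_boundaries(slow_feats, threshold=2.0):
    """Mark frames where the slowest feature's frame-to-frame change is unusually large."""
    speed = np.abs(np.diff(slow_feats[:, 0]))
    return np.where(speed > threshold * speed.std())[0] + 1

if __name__ == "__main__":
    # Synthetic demo: two motion "primitives" with an abrupt switch at frame 100.
    t = np.linspace(0, 4 * np.pi, 200)
    frames = np.column_stack([np.sin(t), np.cos(t), 0.1 * np.random.randn(200)])
    frames[100:, :2] *= -1                        # abrupt change of motion pattern
    feats = linear_sfa(frames, n_components=2)
    print("candidate boundaries:", segment_boundaries(feats))
```

In this toy setup the discontinuity at frame 100 produces a spike in the slowest feature's derivative, which the detector reports as a segmentation point; an incremental variant would update the whitening and slow-feature estimates frame by frame instead of in batch.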