Learning to Reproduce Visually Similar Movements by Minimizing Event-Based Prediction Error
Jacques Kaiser, Svenja Melbaum, J. C. V. Tieck, A. Rönnau, Martin Volker Butz, R. Dillmann
2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), August 2018
DOI: 10.1109/BIOROB.2018.8487959
Citations: 14
Abstract
Prediction is believed to play an important role in the human brain. However, it is still unclear how predictions are used in the process of learning new movements. In this paper, we present a method to learn movements from visual prediction. The method consists of two phases: first learning a visual prediction model for a given movement, then minimizing the visual prediction error. The visual prediction model is learned from a single demonstration of the movement, during which only visual input is sensed. Unlike previous work, we represent visual information as event streams, as provided by a Dynamic Vision Sensor. This allows us to process, with spiking neural networks, only changes in the environment rather than complete snapshots. By minimizing the prediction error, movements visually similar to the demonstration are learned. We evaluate our method by learning simple movements from human demonstrations on different simulated robots. We show that the definition of the visual prediction error greatly impacts the movements learned by our method.
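To make the two-phase idea in the abstract concrete, here is a minimal, hypothetical sketch, not the authors' implementation: dense toy "event frames" stand in for DVS event streams, the prediction model is simply the event stream recorded from a single demonstration, the movement is reduced to one scalar parameter (a velocity), and the prediction error is minimized by a plain parameter sweep rather than with spiking neural networks. The function names and the mean-squared-error definition are assumptions for illustration only.

```python
# Hypothetical sketch of learning a movement by minimizing an event-based
# visual prediction error (toy stand-in, not the paper's implementation).
import numpy as np

def event_frame(pos, size=32, width=1.5):
    """Toy 1-D event frame: event activity concentrated around position `pos`."""
    xs = np.arange(size)
    return np.exp(-0.5 * ((xs - pos) / width) ** 2)

def prediction_error(predicted, observed):
    """One possible error definition (mean squared difference of event frames);
    the paper reports that this choice strongly affects the learned movement."""
    return float(np.mean((predicted - observed) ** 2))

def rollout(velocity, start=5.0, steps=20):
    """Event frames produced when the robot moves with the given velocity."""
    return [event_frame(start + t * velocity) for t in range(1, steps + 1)]

# Phase 1 (assumed): record the event stream of a single demonstration.
# Here the demonstrated movement advances one pixel per time step.
demonstration = rollout(velocity=1.0)

# Phase 2: choose the movement parameter that minimizes the summed
# event-based prediction error against the demonstrated stream.
candidates = np.linspace(0.0, 2.0, 201)
errors = [
    sum(prediction_error(p, o) for p, o in zip(demonstration, rollout(v)))
    for v in candidates
]
best = candidates[int(np.argmin(errors))]
print(f"learned velocity ≈ {best:.2f} (demonstration: 1.00 px/step)")
```

In this toy setting the minimizer recovers the demonstrated velocity; the paper's setting replaces the sweep with learning in spiking neural networks and the toy frames with real DVS event streams.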