Learning to Reproduce Visually Similar Movements by Minimizing Event-Based Prediction Error

Jacques Kaiser, Svenja Melbaum, J. C. V. Tieck, A. Rönnau, Martin Volker Butz, R. Dillmann
DOI: 10.1109/BIOROB.2018.8487959
Published in: 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob)
Publication date: 2018-08-01
Citations: 14

Abstract

Prediction is believed to play an important role in the human brain. However, it is still unclear how predictions are used in the process of learning new movements. In this paper, we present a method to learn movements from visual prediction. The method consists of two phases: learning a visual prediction model for a given movement, then minimizing the visual prediction error. The visual prediction model is learned from a single demonstration of the movement where only visual input is sensed. Unlike previous work, we represent visual information with event streams as provided by a Dynamic Vision Sensor. This allows us to only process changes in the environment instead of complete snapshots using spiking neural networks. By minimizing the prediction error, movements visually similar to the demonstration are learned. We evaluate our method by learning simple movements from human demonstrations on different simulated robots. We show that the definition of the visual prediction error greatly impacts movements learned by our method.
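The two-phase idea in the abstract can be illustrated with a minimal toy sketch. This is an assumption-laden illustration, not the paper's actual implementation (which uses spiking neural networks and a real Dynamic Vision Sensor): here a DVS-like event stream is simulated for a dot moving across pixels, one demonstration is recorded, and a movement parameter is then learned by minimizing an event-based prediction error. The names `simulate_events`, `prediction_error`, and the grid-search optimizer are all hypothetical.

```python
# Toy sketch (NOT the paper's implementation): learning a movement
# parameter by minimizing an event-based prediction error.

def simulate_events(speed, steps=20, size=32):
    """Simulate a DVS-like event stream for a dot moving at `speed`
    pixels per step: an event fires whenever the dot enters a new
    pixel, i.e. only changes in the scene are reported."""
    events = []
    pos = 0.0
    for t in range(steps):
        new_pos = pos + speed
        if int(new_pos) != int(pos) and int(new_pos) < size:
            events.append((t, int(new_pos)))  # (timestep, pixel) event
        pos = new_pos
    return events

def prediction_error(predicted, observed):
    """Event-based prediction error: size of the symmetric difference
    between the predicted and observed event sets (events predicted
    but not observed, plus events observed but not predicted)."""
    return len(set(predicted) ^ set(observed))

# Phase 1: record a single demonstration (only visual input is sensed).
demo_events = simulate_events(speed=0.6)

# Phase 2: minimize the prediction error over candidate movements
# (coarse grid search stands in for the paper's learning procedure).
candidates = [i / 10 for i in range(1, 11)]
best = min(candidates,
           key=lambda s: prediction_error(simulate_events(s), demo_events))
print(best)  # → 0.6: the learned movement reproduces the demonstration
```

Because the error is defined over events rather than full frames, only scene changes matter; as the abstract notes, how exactly this error is defined strongly shapes which movements are learned.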