A Deep Convolutional Neural Network Model for Sense of Agency and Object Permanence in Robots

Claus Lang, G. Schillaci, V. Hafner
{"title":"机器人代理感和物体持久性的深度卷积神经网络模型","authors":"Claus Lang, G. Schillaci, V. Hafner","doi":"10.1109/DEVLRN.2018.8761015","DOIUrl":null,"url":null,"abstract":"This work investigates the role of predictive models in the implementation of basic cognitive skills in robots, such as the capability to distinguish between self-generated actions and those generated by other individuals and the capability to maintain an enhanced internal visual representation of the world, where objects covered by the robot's own body in the original image may be visible in the enhanced one. A developmental approach is adopted for this purpose. In particular, a humanoid robot is learning, through a self-exploration behaviour, the sensory consequences (in the visual domain) of self-generated movements. The generated sensorimotor experience is used as training data for a deep convolutional neural network that maps proprioceptive and motor data (e.g. initial arm joint positions and applied motor commands) onto the visual consequences of these actions. This forward model is then used in two experiments. First, for generating visual predictions of self-generated movements, which are compared to actual visual perceptions and then used to compute a prediction error. This error is shown to be higher when there is an external subject performing actions, compared to situations where the robot is observing only itself. This supports the idea that prediction errors may serve as a cue for distinguishing between self and other, a fundamental prerequisite for the sense of agency. Secondly, we show how predictions can be used to attenuate self-generated movements, and thus create enhanced visual perceptions, where the sight of objects - originally occluded by the robot body - is still maintained. This may represent an important tool both for cognitive development in robots and for the understanding of the sense of object permanence in humans.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"A Deep Convolutional Neural Network Model for Sense of Agency and Object Permanence in Robots\",\"authors\":\"Claus Lang, G. Schillaci, V. Hafner\",\"doi\":\"10.1109/DEVLRN.2018.8761015\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work investigates the role of predictive models in the implementation of basic cognitive skills in robots, such as the capability to distinguish between self-generated actions and those generated by other individuals and the capability to maintain an enhanced internal visual representation of the world, where objects covered by the robot's own body in the original image may be visible in the enhanced one. A developmental approach is adopted for this purpose. In particular, a humanoid robot is learning, through a self-exploration behaviour, the sensory consequences (in the visual domain) of self-generated movements. The generated sensorimotor experience is used as training data for a deep convolutional neural network that maps proprioceptive and motor data (e.g. initial arm joint positions and applied motor commands) onto the visual consequences of these actions. This forward model is then used in two experiments. 
First, for generating visual predictions of self-generated movements, which are compared to actual visual perceptions and then used to compute a prediction error. This error is shown to be higher when there is an external subject performing actions, compared to situations where the robot is observing only itself. This supports the idea that prediction errors may serve as a cue for distinguishing between self and other, a fundamental prerequisite for the sense of agency. Secondly, we show how predictions can be used to attenuate self-generated movements, and thus create enhanced visual perceptions, where the sight of objects - originally occluded by the robot body - is still maintained. This may represent an important tool both for cognitive development in robots and for the understanding of the sense of object permanence in humans.\",\"PeriodicalId\":236346,\"journal\":{\"name\":\"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)\",\"volume\":\"36 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2018.8761015\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2018.8761015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 19

Abstract

This work investigates the role of predictive models in the implementation of basic cognitive skills in robots, such as the capability to distinguish between self-generated actions and those generated by other individuals, and the capability to maintain an enhanced internal visual representation of the world, in which objects covered by the robot's own body in the original image may remain visible. A developmental approach is adopted for this purpose. In particular, a humanoid robot learns, through a self-exploration behaviour, the sensory consequences (in the visual domain) of self-generated movements. The generated sensorimotor experience is used as training data for a deep convolutional neural network that maps proprioceptive and motor data (e.g. initial arm joint positions and applied motor commands) onto the visual consequences of these actions. This forward model is then used in two experiments. First, it generates visual predictions of self-generated movements, which are compared to actual visual perceptions and used to compute a prediction error. This error is shown to be higher when an external subject is performing actions than when the robot is observing only itself, supporting the idea that prediction errors may serve as a cue for distinguishing between self and other, a fundamental prerequisite for the sense of agency. Second, we show how predictions can be used to attenuate self-generated movements and thus create enhanced visual perceptions in which the sight of objects originally occluded by the robot's body is maintained. This may represent an important tool both for cognitive development in robots and for the understanding of the sense of object permanence in humans.
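
To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch. It is not the authors' architecture or code: all layer sizes, variable names (ForwardModel, prediction_error, attenuate_self), the image resolution, and the masking threshold are illustrative assumptions. The sketch shows the three ingredients the abstract names: a deconvolutional forward model mapping proprioceptive and motor input (initial joint positions plus a motor command) onto a predicted image, a pixel-wise prediction error against the actual observation, and an attenuation step that replaces pixels attributed to the robot's own body with a remembered background so that occluded objects stay visible.

# Hypothetical sketch of a forward model and its two uses (self/other
# discrimination via prediction error, and attenuation of self-generated
# movement). Layer sizes and names are illustrative, not the paper's.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Deconvolutional decoder: (joint positions, motor command) -> predicted image."""

    def __init__(self, n_joints: int = 4, img_size: int = 64):
        super().__init__()
        # Joint positions and motor command are concatenated into one vector.
        self.fc = nn.Linear(2 * n_joints, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, joints: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        x = torch.cat([joints, command], dim=1)
        x = self.fc(x).view(-1, 128, 8, 8)
        return self.decoder(x)

def prediction_error(predicted: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
    """Mean squared pixel error; expected to rise when another agent moves in view."""
    return ((predicted - observed) ** 2).mean(dim=(1, 2, 3))

def attenuate_self(observed: torch.Tensor, predicted: torch.Tensor,
                   background: torch.Tensor, threshold: float = 0.2) -> torch.Tensor:
    """Replace pixels where the forward model predicts the robot's own body
    with a remembered background, keeping occluded objects visible."""
    self_mask = (predicted > threshold).float()
    return self_mask * background + (1.0 - self_mask) * observed

if __name__ == "__main__":
    model = ForwardModel()
    joints = torch.rand(1, 4)                 # normalised initial joint positions
    command = torch.rand(1, 4)                # normalised applied motor command
    observed = torch.rand(1, 1, 64, 64)       # current camera image
    background = torch.zeros(1, 1, 64, 64)    # remembered scene without the arm

    predicted = model(joints, command)
    print("prediction error:", prediction_error(predicted, observed).item())
    enhanced = attenuate_self(observed, predicted, background)
    print("enhanced image shape:", tuple(enhanced.shape))

In a sketch like this, the forward model would be trained by regressing predicted images against the images actually observed after executing the motor command during self-exploration; the prediction-error threshold separating "self only" from "another agent present" would then be set empirically on held-out self-observation data.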