An acquisition of the relation between vision and action using self-organizing map and reinforcement learning

K. Terada, Hideaki Takeda, T. Nishida
{"title":"An acquisition of the relation between vision and action using self-organizing map and reinforcement learning","authors":"K. Terada, Hideaki Takeda, T. Nishida","doi":"10.1109/KES.1998.725881","DOIUrl":null,"url":null,"abstract":"An agent must acquire internal representation appropriate for its task, environment, and sensors. As a learning algorithm, reinforcement learning is often utilized to acquire the relation between sensory input and action. Learning agents in the real world using visual sensors are often confronted with the critical problem of how to build a necessary and sufficient state space for the agent to execute the task. We propose the acquisition of a relation between vision and action using the visual state-action map (VSAM). VSAM is the application of a self-organizing map (SOM). Input image data is mapped on the node of the learned VSAM. Then VSAM outputs the appropriate action for the state. We applied VSAM to a real robot. The experimental result shows that a real robot avoids the wall while moving around the environment.","PeriodicalId":394492,"journal":{"name":"1998 Second International Conference. Knowledge-Based Intelligent Electronic Systems. Proceedings KES'98 (Cat. No.98EX111)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1998 Second International Conference. Knowledge-Based Intelligent Electronic Systems. Proceedings KES'98 (Cat. No.98EX111)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/KES.1998.725881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

An agent must acquire an internal representation appropriate for its task, environment, and sensors. Reinforcement learning is often used as the learning algorithm for acquiring the relation between sensory input and action. Learning agents in the real world that use visual sensors are often confronted with the critical problem of how to build a state space that is necessary and sufficient for executing the task. We propose acquiring the relation between vision and action using the visual state-action map (VSAM). VSAM is an application of the self-organizing map (SOM). Input image data is mapped onto a node of the learned VSAM, and VSAM then outputs the appropriate action for that state. We applied VSAM to a real robot. The experimental results show that the real robot avoids walls while moving around its environment.
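The abstract describes mapping camera images onto the nodes of a self-organizing map and attaching an action to each node, with the action values shaped by reinforcement. The sketch below illustrates that general idea in Python/NumPy only; the class name `VisualStateActionMap`, the grid size, feature dimension, action set, and the reward-averaging update are all illustrative assumptions and not the authors' implementation.

```python
# Minimal, hypothetical sketch of a SOM-based visual state-action map (VSAM).
# All names, dimensions, and update rules are illustrative assumptions.
import numpy as np

class VisualStateActionMap:
    def __init__(self, grid=(10, 10), input_dim=64, n_actions=3, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.weights = rng.random((grid[0] * grid[1], input_dim))  # SOM codebook vectors
        self.q = np.zeros((grid[0] * grid[1], n_actions))          # per-node action values

    def best_node(self, x):
        # Best-matching unit: node whose codebook vector is closest to the image features.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train_som(self, x, lr=0.1, sigma=1.5):
        # Standard SOM update: pull the winner and its grid neighbours toward the input.
        bmu = self.best_node(x)
        rows, cols = np.unravel_index(np.arange(self.weights.shape[0]), self.grid)
        bmu_r, bmu_c = np.unravel_index(bmu, self.grid)
        dist2 = (rows - bmu_r) ** 2 + (cols - bmu_c) ** 2
        h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]
        self.weights += lr * h * (x - self.weights)
        return bmu

    def act(self, x):
        # Map the current image onto a SOM node (the "state") and return its greedy action.
        node = self.best_node(x)
        return node, int(np.argmax(self.q[node]))

    def reinforce(self, node, action, reward, alpha=0.2):
        # Simple reward-averaging update of the chosen node/action value.
        self.q[node, action] += alpha * (reward - self.q[node, action])


# Example usage with a stand-in feature vector for a processed camera image.
vsam = VisualStateActionMap()
image_features = np.random.rand(64)
vsam.train_som(image_features)
node, action = vsam.act(image_features)
vsam.reinforce(node, action, reward=1.0)  # e.g. positive reward for not hitting a wall
```

In this sketch the SOM discretizes the visual input into states, and a separate table of action values over those states plays the role of the learned vision-to-action relation; how the original VSAM couples the two is specified only at the level of the abstract above.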