{"title":"An acquisition of the relation between vision and action using self-organizing map and reinforcement learning","authors":"K. Terada, Hideaki Takeda, T. Nishida","doi":"10.1109/KES.1998.725881","DOIUrl":null,"url":null,"abstract":"An agent must acquire internal representation appropriate for its task, environment, and sensors. As a learning algorithm, reinforcement learning is often utilized to acquire the relation between sensory input and action. Learning agents in the real world using visual sensors are often confronted with the critical problem of how to build a necessary and sufficient state space for the agent to execute the task. We propose the acquisition of a relation between vision and action using the visual state-action map (VSAM). VSAM is the application of a self-organizing map (SOM). Input image data is mapped on the node of the learned VSAM. Then VSAM outputs the appropriate action for the state. We applied VSAM to a real robot. The experimental result shows that a real robot avoids the wall while moving around the environment.","PeriodicalId":394492,"journal":{"name":"1998 Second International Conference. Knowledge-Based Intelligent Electronic Systems. Proceedings KES'98 (Cat. No.98EX111)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1998 Second International Conference. Knowledge-Based Intelligent Electronic Systems. Proceedings KES'98 (Cat. No.98EX111)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/KES.1998.725881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
An agent must acquire an internal representation appropriate to its task, environment, and sensors. Reinforcement learning is often used as the learning algorithm for acquiring the relation between sensory input and action. Real-world learning agents that use visual sensors face the critical problem of how to build a state space that is necessary and sufficient for executing the task. We propose acquiring the relation between vision and action using a visual state-action map (VSAM), an application of the self-organizing map (SOM). Input image data is mapped onto a node of the learned VSAM, and the VSAM then outputs the appropriate action for that state. We applied VSAM to a real robot; the experimental results show that the robot avoids walls while moving around its environment.
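To make the mechanism the abstract describes more concrete, here is a minimal Python sketch of a SOM-based state-action map: visual input is matched to its best-matching node (the discrete state), and each node carries an action preference refined by a reward signal. The class name, grid size, input dimension, and update rules are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a visual state-action map (VSAM): a self-organizing
# map (SOM) clusters visual input into discrete states, and each node
# stores action values adjusted by reinforcement. All dimensions and
# update rules are illustrative assumptions, not the paper's method.
import numpy as np

class VSAM:
    def __init__(self, grid=(8, 8), input_dim=64, n_actions=3, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        # SOM codebook: one prototype (weight vector) per node.
        self.weights = rng.random((grid[0] * grid[1], input_dim))
        # Per-node action values, refined by reward feedback.
        self.q = np.zeros((grid[0] * grid[1], n_actions))

    def winner(self, x):
        # Best-matching unit: the node whose prototype is closest to x.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def som_update(self, x, lr=0.1, radius=1.5):
        # Standard SOM step: pull the winner and its grid neighbours
        # toward the input image.
        w = self.winner(x)
        wy, wx = divmod(w, self.grid[1])
        for i in range(self.weights.shape[0]):
            iy, ix = divmod(i, self.grid[1])
            d2 = (iy - wy) ** 2 + (ix - wx) ** 2
            h = np.exp(-d2 / (2 * radius ** 2))  # neighbourhood strength
            self.weights[i] += lr * h * (x - self.weights[i])

    def act(self, x):
        # Map the image to a node (state) and return its greedy action.
        return int(np.argmax(self.q[self.winner(x)]))

    def reinforce(self, x, action, reward, lr=0.2):
        # Reward-based update of the winning node's action value.
        node = self.winner(x)
        self.q[node, action] += lr * (reward - self.q[node, action])

# Hypothetical usage: one perception-action-reward cycle.
vsam = VSAM()
image = np.random.default_rng(1).random(64)  # stand-in for a camera frame
vsam.som_update(image)                       # refine the state map
a = vsam.act(image)                          # choose an action
vsam.reinforce(image, a, reward=1.0)         # e.g. +1 for avoiding a wall
```

The SOM step and the reinforcement step are deliberately decoupled here: the map organizes the visual state space while rewards shape the action stored at each node, which matches the division of labor the abstract sketches.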