Title: Suggestion of probabilistic reward-independent knowledge for dynamic environment in reinforcement learning
Authors: Nodoka Shibuya, Yoshiki Miyazaki, K. Kurashige
DOI: 10.1109/MHS.2011.6102175
Published in: 2011 International Symposium on Micro-NanoMechatronics and Human Science
Publication date: 2011-12-15
Citations: 0
Abstract
Reinforcement learning has recently attracted attention as a learning technique that is often applied to real robots. One of its problems is that it copes poorly with a changing purpose (goal), because the learning depends on the reward. In previous work, we suggested solving this problem by learning to use information that does not depend on the reward, namely the environmental transitions. We defined this information as "Reward-Independent Knowledge (RIK)". A robot acquires RIK and uses it to predict a route from an initial state to a goal state; with RIK, reinforcement learning can cope with a changing purpose. However, it is difficult for RIK to cope with a dynamic environment, because RIK assumes a one-to-one correspondence between a state-action pair and the next state. We therefore propose that RIK hold multiple possible next states together with the probability of each. In this paper, we perform a simulation experiment on a maze problem in which both the goal and the structure of the maze change, and we show that the proposed knowledge can cope with a changing purpose and a dynamic environment.
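The core proposal is that RIK map each state-action pair to a distribution over next states rather than a single next state. The abstract does not give the update rule, so the following is only a minimal sketch under the common assumption that the probabilities are estimated from observed transition counts; all class and method names here are hypothetical, not from the paper.

```python
from collections import defaultdict

class ProbabilisticRIK:
    """Reward-independent knowledge with probabilistic transitions:
    an estimate of P(next_state | state, action) built from counts of
    observed transitions (a sketch; the paper's exact scheme may differ)."""

    def __init__(self):
        # counts[(state, action)][next_state] = number of times observed
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        """Record one observed environmental transition (no reward needed)."""
        self.counts[(state, action)][next_state] += 1

    def transition_probs(self, state, action):
        """Return {next_state: probability} for a state-action pair."""
        outcomes = self.counts[(state, action)]
        total = sum(outcomes.values())
        if total == 0:
            return {}
        return {s2: n / total for s2, n in outcomes.items()}

    def most_likely_next(self, state, action):
        """Most probable next state, used when predicting a route."""
        probs = self.transition_probs(state, action)
        return max(probs, key=probs.get) if probs else None
```

Because the knowledge stores every observed next state with its relative frequency, a transition that becomes impossible after the maze structure changes is gradually outweighed by new observations, which is how a count-based estimate of this kind can track a dynamic environment.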