{"title":"Exception-based reinforcement learning","authors":"Pascal Garcia","doi":"10.1109/IECON.2001.975612","DOIUrl":null,"url":null,"abstract":"In this paper we develop a method using temporally abstract actions to solve Markov decision processes. The basic idea of our method is to define some kind of procedures to control the agent's behavior. These procedures contain a rule constraining actions the agent has to choose. This rule is applied except if some conditions (which we call exceptions) are fulfilled. In this case we relax constraints on actions. We develop a way to propagate states that have created an exception to a rule, to help the agent to escape from blocked situations or locally optimal solutions. We illustrate the method using the \"Sokoban\" game. We compare the method empirically with flat Q-learning. On the proposed tests, learning time is drastically reduced as is the memory required to save the Q-values.","PeriodicalId":345608,"journal":{"name":"IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No.37243)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No.37243)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IECON.2001.975612","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In this paper we develop a method that uses temporally abstract actions to solve Markov decision processes. The basic idea is to define procedures that control the agent's behavior. Each procedure contains a rule constraining the actions the agent may choose. The rule is applied unless certain conditions, which we call exceptions, are fulfilled; in that case the constraints on actions are relaxed. We also develop a way to propagate states that have triggered an exception to a rule, helping the agent escape from blocked situations or locally optimal solutions. We illustrate the method on the "Sokoban" game and compare it empirically with flat Q-learning. On the proposed tests, learning time is drastically reduced, as is the memory required to store the Q-values.
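The abstract does not spell out the algorithm, so the following is only a minimal sketch of the rule/exception idea layered on ordinary tabular Q-learning, written in Python. The environment interface, the rule predicate, and all function names are hypothetical stand-ins, not the paper's actual procedures.

```python
import random
from collections import defaultdict

# Minimal sketch: Q-learning whose action set is constrained by a rule,
# with the constraint relaxed in states previously marked as exceptions.
# The rule predicate and environment interface are hypothetical stand-ins.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)            # tabular Q-values, keyed by (state, action)
exception_states = set()          # states where the rule previously led to a dead end

def constrained_actions(state, legal_actions, rule):
    """Apply the rule's constraint unless this state triggered an exception before."""
    if state in exception_states:
        return legal_actions                # exception: relax the constraint
    preferred = [a for a in legal_actions if rule(state, a)]
    return preferred or legal_actions       # fall back if the rule filters out everything

def choose_action(state, legal_actions, rule):
    """Epsilon-greedy choice restricted to the rule-allowed actions."""
    candidates = constrained_actions(state, legal_actions, rule)
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_legal_actions, done):
    """Standard one-step Q-learning backup (the flat Q-learning baseline update)."""
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in next_legal_actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def mark_exception(state):
    """Record a state whose rule-constrained behavior got the agent stuck,
    so the constraint is relaxed there on later visits."""
    exception_states.add(state)
```

The role of mark_exception here is that once a state is known to lead the rule-constrained behavior into a dead end, the full action set becomes available in that state again, which mirrors the paper's mechanism for escaping blocked situations or locally optimal solutions; the paper additionally propagates such exception states, which this sketch omits.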