{"title":"Demonstration of expert knowledge injection in Fuzzy Rule Interpolation based Q-learning","authors":"T. Tompa, S. Kovács, D. Vincze, M. Niitsuma","doi":"10.1109/IEEECONF49454.2021.9382734","DOIUrl":null,"url":null,"abstract":"The learning phase of the traditional reinforcement learning methods can be started without any preliminary knowledge about the problem needed to be solved. The problem related knowledge-base is built based on the reinforcement signals of the environment during the trial and error style learning phase. If a portion of the a priori knowledge about the problem solution is available and if it could be injected into the initial knowledge of the reinforcement learning system, then the learning performance (and the learning ability of an agent) could be significantly improved. The goal of this paper is to highlight the effect of the external expert knowledge inclusion into the Fuzzy Rule Interpolation based Q-learning (FRIQ-learning) method, by briefly introducing a way for expert knowledge injection into FRIQ-learning and a discussion based on simulated runs of a practical benchmark example. The investigations presented here can aid in the designing of behaviour-based robot control systems, in such cases where the available expert knowledge is not enough by itself to construct a sufficiently working system.","PeriodicalId":395378,"journal":{"name":"2021 IEEE/SICE International Symposium on System Integration (SII)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/SICE International Symposium on System Integration (SII)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEEECONF49454.2021.9382734","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
The learning phase of traditional reinforcement learning methods can be started without any preliminary knowledge of the problem to be solved. The problem-related knowledge base is built from the reinforcement signals of the environment during the trial-and-error learning phase. If a portion of a priori knowledge about the problem solution is available and can be injected into the initial knowledge of the reinforcement learning system, then the learning performance (and the learning ability of the agent) can be significantly improved. The goal of this paper is to highlight the effect of including external expert knowledge in the Fuzzy Rule Interpolation based Q-learning (FRIQ-learning) method, by briefly introducing a way to inject expert knowledge into FRIQ-learning and by discussing simulated runs of a practical benchmark example. The investigation presented here can aid the design of behaviour-based robot control systems in cases where the available expert knowledge is not sufficient by itself to construct an adequately working system.
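To make the idea of seeding a learner's initial knowledge base concrete, the sketch below shows ordinary tabular Q-learning on a toy one-dimensional grid world, once started from an empty Q-table and once from a Q-table pre-filled with a few hand-written "expert" hints. This is only an illustrative analogy, not the authors' FRIQ-learning method: the environment, the rule format (state, action) -> value, and the expert values are assumptions chosen for the example; FRIQ-learning itself works on a sparse fuzzy rule base with rule interpolation rather than a full Q-table.

```python
import random

# Minimal sketch (not the authors' FRIQ-learning implementation): tabular
# Q-learning on a toy 1-D grid world, comparing an agent that starts from a
# blank Q-table with one whose Q-table is seeded from hypothetical expert hints.

N_STATES = 10          # states 0..9, goal at state 9
ACTIONS = [-1, +1]     # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1


def step(state, action):
    """One environment transition: reward 1 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done


def make_q_table(expert_rules=None):
    """Zero-initialised Q-table, optionally overwritten by expert (state, action) -> value hints."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for (s, a), value in (expert_rules or {}).items():
        q[(s, a)] = value
    return q


def train(q, episodes=200):
    """Standard epsilon-greedy Q-learning; returns the number of steps needed per episode."""
    steps_per_episode = []
    for _ in range(episodes):
        state, steps, done = 0, 0, False
        while not done and steps < 200:
            if random.random() < EPS:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state, steps = nxt, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical expert hint: "moving right is promising in every state".
    expert = {(s, +1): 0.5 for s in range(N_STATES)}
    blank = train(make_q_table())
    seeded = train(make_q_table(expert_rules=expert))
    print("mean steps, first 20 episodes (blank)  :", sum(blank[:20]) / 20)
    print("mean steps, first 20 episodes (seeded) :", sum(seeded[:20]) / 20)
```

With a reasonable hint the seeded agent typically reaches the goal in far fewer steps during the early episodes, which is the effect the paper examines for FRIQ-learning: the injected knowledge does not have to be complete or exact, it only has to bias the initial knowledge base toward useful behaviour while the reinforcement signals refine it further.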