Object-Oriented Reinforcement Learning in Cooperative Multiagent Domains
Felipe Leno da Silva, R. Glatt, Anna Helena Reali Costa
2016 5th Brazilian Conference on Intelligent Systems (BRACIS), October 2016
DOI: 10.1109/BRACIS.2016.015
Citations: 6
Abstract
Although Reinforcement Learning methods have been successfully applied to increasingly large problems, scalability remains a central issue. Object-Oriented Markov Decision Processes (OO-MDPs) exploit regularities in a domain, while Multiagent System (MAS) methods divide the workload among multiple agents. In this work we propose a novel combination of OO-MDPs and MAS, called the Multiagent Object-Oriented Markov Decision Process (MOO-MDP), which accrues the benefits of both strategies and thus better addresses scalability issues. We present an algorithm to solve deterministic cooperative MOO-MDPs and prove that it learns optimal policies while reducing the learning space by exploiting state abstractions. We experimentally compare our results with earlier approaches and show advantages with regard to discounted cumulative reward, number of steps to fulfill the task, and Q-table size.
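The abstract only summarizes the idea, so the following is a minimal, hypothetical sketch of how an OO-MDP-style state abstraction can shrink a tabular Q-table in a cooperative multiagent setting. It is not the paper's actual algorithm: the names (GridObject, abstract_state, MOOQLearner) are illustrative assumptions, and the joint-action update shown is plain Q-learning rather than the deterministic MOO-MDP solver the authors prove optimal.

```python
# Sketch only: tabular Q-learning over an *abstracted* object-oriented state.
# States are described by object attributes; objects of the same class that
# share the same class-level features collapse to one abstract state, which
# is the kind of Q-table-size reduction the abstract reports.
# All names here are hypothetical illustrations, not the paper's API.

import random
from collections import defaultdict

class GridObject:
    """An object instance: a class label plus attribute values."""
    def __init__(self, cls, x, y):
        self.cls, self.x, self.y = cls, x, y

def abstract_state(objects, agent):
    """Map a concrete configuration to an abstract one: keep only each
    object's class and its position relative to the agent, so instances
    that differ only in identity map to the same abstract state."""
    features = sorted((o.cls, o.x - agent.x, o.y - agent.y) for o in objects)
    return tuple(features)

class MOOQLearner:
    """Standard epsilon-greedy Q-learning over abstract states and
    joint actions (one action per cooperating agent)."""
    def __init__(self, joint_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (abstract_state, joint_action) -> value
        self.joint_actions = joint_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, s):
        if random.random() < self.epsilon:
            return random.choice(self.joint_actions)
        return max(self.joint_actions, key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.joint_actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

# Hypothetical usage: two agents pick a joint action in a toy grid task.
agent = GridObject("agent", 0, 0)
gold = GridObject("gold", 2, 1)
learner = MOOQLearner(joint_actions=[("north", "south"), ("east", "west")])
s = abstract_state([gold], agent)
a = learner.act(s)
learner.update(s, a, r=1.0, s_next=s)
```

The design point this sketch is meant to convey is that the Q-table is keyed by abstract states rather than concrete ones: any two configurations whose objects agree on class and relative position share one table entry, so the learning space shrinks without changing the underlying update rule.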