A new approach on multi-agent Multi-Objective Reinforcement Learning based on agents' preferences
Zeinab Daavarani Asl, V. Derhami, Mehdi Yazdian-Dehkordi
2017 Artificial Intelligence and Signal Processing Conference (AISP), October 2017. DOI: 10.1109/AISP.2017.8324111
Citations: 3
Abstract
Reinforcement Learning (RL) is a powerful machine learning paradigm for solving Markov Decision Processes (MDPs). Traditional RL algorithms aim to solve single-objective problems, but many real-world problems have more than one objective, and these objectives conflict with each other. In recent years, Multi-Objective Reinforcement Learning (MORL) algorithms, which employ a reward vector instead of a scalar reward signal, have been proposed to solve multi-objective problems. In MORL, because of the conflicting objectives, there is no single optimal solution; instead, a set of solutions known as the Pareto front is learned. In this paper, we propose a new multi-agent method that uses a single Q-table shared by all agents to solve bi-objective problems. Each agent, however, selects actions based on its own preference. These preferences differ from agent to agent, and the agents reach Pareto front solutions according to them. The proposed method is simple to understand and its computational cost is very low. Moreover, once the Pareto front set has been found, the corresponding policy can easily be recovered. Simulation results show that the proposed method outperforms existing methods in terms of learning speed.
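To make the core idea concrete, below is a minimal Python sketch of a vector-valued Q-table shared by several agents, where each agent scalarizes the two objectives with its own preference weight. The abstract does not specify the authors' exact update rule or action-selection scheme, so the linear scalarization, the epsilon-greedy policy, and all names here (`select_action`, `update`, `preference`, the environment interface) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of preference-based multi-agent MORL with a shared
# vector-valued Q-table (bi-objective case). Not the authors' exact algorithm.

N_STATES, N_ACTIONS, N_OBJECTIVES = 25, 4, 2  # toy grid-world sizes (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# One Q-table shared by all agents: Q[s, a] is a vector of per-objective values.
Q = np.zeros((N_STATES, N_ACTIONS, N_OBJECTIVES))

def scalarize(q_vectors, preference):
    """Linear scalarization: weight objective 0 by `preference`, objective 1 by the rest."""
    return preference * q_vectors[:, 0] + (1.0 - preference) * q_vectors[:, 1]

def select_action(state, preference, rng):
    """Epsilon-greedy action choice on this agent's preference-scalarized Q-values."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(scalarize(Q[state], preference)))

def update(state, action, reward_vec, next_state, preference):
    """Component-wise TD update of the shared vector Q-table.

    The greedy next action is chosen under this agent's preference, so agents
    with different preferences pull the shared table toward different
    trade-offs between the two objectives.
    """
    a_next = int(np.argmax(scalarize(Q[next_state], preference)))
    td_error = reward_vec + GAMMA * Q[next_state, a_next] - Q[state, action]
    Q[state, action] += ALPHA * td_error
```

In this reading of the abstract, each agent runs episodes with its own fixed `preference` while all agents write into the same shared `Q`, so the set of per-preference greedy policies approximates different points on the Pareto front.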