Multi-Agent Evolutionary Reinforcement Learning Based on Cooperative Games
Authors: Jin Yu; Ya Zhang; Changyin Sun
DOI: 10.1109/TETCI.2024.3452119
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 2, pp. 1650-1658
Published: 2024-09-05 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10666798/
Impact Factor: 5.3 · JCR: Q1 (Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Despite significant advances in single-agent evolutionary reinforcement learning, research on evolutionary reinforcement learning in multi-agent systems is still in its nascent stage. The integration of evolutionary algorithms (EA) and reinforcement learning (RL) has partially mitigated RL's reliance on the environment and provided it with an ample supply of data. Nonetheless, existing studies focus primarily on indirect collaboration between RL and EA and lack sufficient exploration of how to effectively balance individual and team rewards. To address this problem, this study introduces game theory to establish a dynamic cooperation framework between EA and RL, and proposes a multi-agent evolutionary reinforcement learning algorithm based on cooperative games. This framework enables more efficient direct collaboration between RL and EA, enhancing individual rewards while ensuring the attainment of team objectives. First, a cooperative policy is formed through a joint network, which reduces each agent's parameters and speeds up the overall training process. Second, RL and EA engage in cooperative games to determine, based on Pareto-optimal results, whether RL jointly optimizes the same policy. Finally, through dual-objective optimization, a balance between the two types of rewards is achieved, with EA focusing on team rewards and RL focusing on individual rewards. Experimental results demonstrate that the proposed algorithm outperforms its single-algorithm counterparts in terms of competitiveness.
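The abstract's training loop can be illustrated with a minimal sketch. Everything below is assumed for illustration, not taken from the paper: `team_reward` and `individual_reward` are toy stand-ins for the true reward functions, policies are plain weight lists rather than a joint network, and the RL step is a trivial ascent. The sketch only shows the structure the abstract describes: EA selects on team reward, RL improves individual rewards, and a Pareto check (the cooperative game) decides whether the RL update is adopted.

```python
import random

def team_reward(policy):
    # Toy stand-in for the team objective: sum of all agent weights.
    return sum(policy)

def individual_reward(policy, agent_idx):
    # Toy stand-in for one agent's own objective: its own weight.
    return policy[agent_idx]

def ea_step(population, elite_frac=0.5, noise=0.1):
    # EA side: keep elites ranked by TEAM reward, refill with mutated copies.
    population.sort(key=team_reward, reverse=True)
    n_elite = max(1, int(len(population) * elite_frac))
    elites = population[:n_elite]
    children = [[w + random.gauss(0, noise) for w in random.choice(elites)]
                for _ in range(len(population) - n_elite)]
    return elites + children

def rl_step(policy, lr=0.05):
    # RL side: each agent nudges its own weight to raise its INDIVIDUAL reward
    # (the gradient of this toy individual reward w.r.t. the own weight is 1).
    return [w + lr for w in policy]

def pareto_improves(new, old):
    # Cooperative-game check: adopt the RL update only if it is a Pareto
    # improvement, i.e. no objective worsens and at least one improves.
    n = len(old)
    no_worse = (team_reward(new) >= team_reward(old) and
                all(individual_reward(new, i) >= individual_reward(old, i)
                    for i in range(n)))
    strictly_better = (team_reward(new) > team_reward(old) or
                       any(individual_reward(new, i) > individual_reward(old, i)
                           for i in range(n)))
    return no_worse and strictly_better

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
init_best = max(team_reward(p) for p in population)

for _ in range(20):
    population = ea_step(population)       # EA optimizes the team reward
    best = population[0]
    candidate = rl_step(best)              # RL optimizes individual rewards
    if pareto_improves(candidate, best):   # game decides joint optimization
        population[0] = candidate

final_reward = team_reward(population[0])
```

In this toy setting the RL update raises every objective, so the Pareto check always accepts it; in the paper's setting the check is what prevents individual-reward optimization from undermining the team objective.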
Journal Introduction
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for IoT and Smart-X technologies.