Graph-QMIX: Addressing the Partial Observation Issues via Graph Neural Network in Multi-Agent Reinforcement Learning
Duoning Pan, Dou An, Ruining Zhang
2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC), published 2022-11-19
DOI: 10.1109/YAC57282.2022.10023781
Citations: 0
Abstract
In recent years, the development of multi-agent reinforcement learning has enabled increasingly complex tasks to be solved. However, today's multi-agent reinforcement learning faces two challenges: 1) the global state is typically used to train the neural network, yet it is hard to obtain in the real world; 2) compared to using the global state, concatenating local observations degrades the performance of multi-agent reinforcement learning algorithms. These challenges make it difficult to apply multi-agent reinforcement learning algorithms in real-world scenarios. To address them, we propose the Graph-QMIX algorithm, in which all agents are modeled as nodes of a graph and a graph convolutional neural network is used to integrate the agents' local observations. We evaluate our method on the 2s_vs_1sc and 10m_vs_11m maps of the SMAC environment. Empirical simulation results show that our method matches the performance of QMIX trained with the global state and substantially outperforms QMIX trained on concatenated local observations.
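To make the core idea concrete, the following is a minimal sketch of the kind of graph-convolution aggregation the abstract describes: each agent's local observation is a node feature, and one normalized graph-convolution layer mixes information across connected agents, producing an embedding that can stand in for the global state. The layer sizes, the fully connected adjacency, and the random weight initialization here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def gcn_aggregate(obs, adj, weight):
    """One graph-convolution layer over the agent graph.

    obs:    (n_agents, obs_dim) local observations, one row per agent
    adj:    (n_agents, n_agents) binary adjacency of the agent graph
    weight: (obs_dim, hidden_dim) learnable projection (random here)
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                    # add self-loops
    deg_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = deg_inv_sqrt @ a_hat @ deg_inv_sqrt  # symmetric normalization
    return np.maximum(a_norm @ obs @ weight, 0.0)  # ReLU activation

# 3 agents with 4-dim local observations on a fully connected agent graph
rng = np.random.default_rng(0)
obs = rng.standard_normal((3, 4))
adj = np.ones((3, 3)) - np.eye(3)
weight = rng.standard_normal((4, 8))

mixed = gcn_aggregate(obs, adj, weight)
print(mixed.shape)  # each agent now holds an 8-dim neighborhood-aware embedding
```

In a QMIX-style pipeline, such per-agent embeddings would feed the mixing network in place of the hard-to-obtain global state; how Graph-QMIX builds the adjacency and stacks layers is specified in the paper itself.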