{"title":"非静止环境下的多智能体战斗","authors":"Shengang Li, Haoang Chi, Tao Xie","doi":"10.1109/IJCNN52387.2021.9534036","DOIUrl":null,"url":null,"abstract":"Multi-agent combat is a combat scenario in multiagent reinforcement learning (MARL). In this combat, agents use reinforcement learning methods to learn optimal policies. Actually, policy may be changed, which leads to a non-stationary environment. In this case, it is difficult to predict opponents' policies. Many reinforcement learning methods try to solve nonstationary problems. Most of the previous works put all agents into a frame and model their policies to deal with non-stationarity of environments. But, in a combat environment, opponents can not be in the same frame as our agents. We group opponents and our agents into two frames, only considering opponents as a part of the environment. In this paper, we focus on the problem of modelling opponents' policies in non-stationary environments. To solve this problem, we propose an algorithm called Additional Opponent Characteristics Multi-agent Deep Deterministic Policy Gradient (AOC-MADDPG) with the following contributions: (1) We propose a new actor-critic framework to deal with nonstationarity of environments in MARL, so that agents can adapt to more complex environments. (2) A model for opponents' policies is built by introducing observations and actions of the opponents into the critic network as additional characteristics. We evaluate our AOC-MADDPG algorithm in two multi-agent combat environments. As a result, our approach significantly outperforms the baseline. Agents trained by our method can get higher rewards in non-stationary environments.","PeriodicalId":396583,"journal":{"name":"2021 International Joint Conference on Neural Networks (IJCNN)","volume":"456 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Multi-Agent Combat in Non-Stationary Environments\",\"authors\":\"Shengang Li, Haoang Chi, Tao Xie\",\"doi\":\"10.1109/IJCNN52387.2021.9534036\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-agent combat is a combat scenario in multiagent reinforcement learning (MARL). In this combat, agents use reinforcement learning methods to learn optimal policies. Actually, policy may be changed, which leads to a non-stationary environment. In this case, it is difficult to predict opponents' policies. Many reinforcement learning methods try to solve nonstationary problems. Most of the previous works put all agents into a frame and model their policies to deal with non-stationarity of environments. But, in a combat environment, opponents can not be in the same frame as our agents. We group opponents and our agents into two frames, only considering opponents as a part of the environment. In this paper, we focus on the problem of modelling opponents' policies in non-stationary environments. To solve this problem, we propose an algorithm called Additional Opponent Characteristics Multi-agent Deep Deterministic Policy Gradient (AOC-MADDPG) with the following contributions: (1) We propose a new actor-critic framework to deal with nonstationarity of environments in MARL, so that agents can adapt to more complex environments. (2) A model for opponents' policies is built by introducing observations and actions of the opponents into the critic network as additional characteristics. 
We evaluate our AOC-MADDPG algorithm in two multi-agent combat environments. As a result, our approach significantly outperforms the baseline. Agents trained by our method can get higher rewards in non-stationary environments.\",\"PeriodicalId\":396583,\"journal\":{\"name\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"456 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN52387.2021.9534036\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN52387.2021.9534036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-agent combat is a combat scenario in multi-agent reinforcement learning (MARL). In such combat, agents use reinforcement learning methods to learn optimal policies. In practice, agents' policies change during learning, which makes the environment non-stationary and opponents' policies hard to predict. Many reinforcement learning methods attempt to address this non-stationarity. Most previous works place all agents in a single frame and model their policies to cope with the non-stationarity of the environment. In a combat setting, however, opponents cannot share a frame with our agents. We therefore split opponents and our agents into two frames and treat the opponents only as part of the environment. In this paper, we focus on modelling opponents' policies in non-stationary environments. To this end, we propose an algorithm called Additional Opponent Characteristics Multi-agent Deep Deterministic Policy Gradient (AOC-MADDPG) with the following contributions: (1) we propose a new actor-critic framework that handles the non-stationarity of environments in MARL, so that agents can adapt to more complex environments; (2) we build a model of the opponents' policies by feeding the opponents' observations and actions into the critic network as additional characteristics. We evaluate AOC-MADDPG in two multi-agent combat environments, where it significantly outperforms the baseline: agents trained with our method obtain higher rewards in non-stationary environments.
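Contribution (2) amounts to an opponent-aware centralized critic. The following is a minimal sketch of what such a critic could look like, assuming a PyTorch implementation; the class name OpponentAwareCritic, the layer sizes, and the input dimensions are illustrative assumptions, not the authors' actual network.

import torch
import torch.nn as nn


class OpponentAwareCritic(nn.Module):
    """Centralized Q-network: Q(team_obs, team_act, opp_obs, opp_act) -> scalar."""

    def __init__(self, team_obs_dim, team_act_dim, opp_obs_dim, opp_act_dim, hidden=128):
        super().__init__()
        # The critic input concatenates our agents' observations/actions with
        # the opponents' observations/actions ("additional characteristics").
        in_dim = team_obs_dim + team_act_dim + opp_obs_dim + opp_act_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value estimate
        )

    def forward(self, team_obs, team_act, opp_obs, opp_act):
        x = torch.cat([team_obs, team_act, opp_obs, opp_act], dim=-1)
        return self.net(x)


if __name__ == "__main__":
    # Toy usage with made-up dimensions: a batch of 4 joint transitions.
    critic = OpponentAwareCritic(team_obs_dim=10, team_act_dim=4,
                                 opp_obs_dim=10, opp_act_dim=4)
    q = critic(torch.randn(4, 10), torch.randn(4, 4),
               torch.randn(4, 10), torch.randn(4, 4))
    print(q.shape)  # torch.Size([4, 1])

In a MADDPG-style centralized-critic, decentralized-actor setup, only the critic would see this extra opponent information during training, while each actor still acts on its own local observation at execution time.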