{"title":"多重追逃马尔可夫博弈的分散学习","authors":"S. Givigi, H. Schwartz","doi":"10.1109/MED.2011.5983135","DOIUrl":null,"url":null,"abstract":"We represent the multiple pursuers and evaders game as a Markov game and each player as a decentralized unit that has to work independently in order to complete a task. Most proposed solutions for this distributed multiagent decision problem require some sort of central coordination. In this paper, we intend to model each player as a learning automata (LA) and let them evolve and adapt in order to solve the difficult problem they have at hand. We are also going to show that using the proposed learning process, the players' policies will converge to an equilibrium point. Simulations of such scenarios with multiple pursuers and evaders are presented in order to show the feasibility of the approach.","PeriodicalId":146203,"journal":{"name":"2011 19th Mediterranean Conference on Control & Automation (MED)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Decentralized learning in multiple pursuer-evader Markov games\",\"authors\":\"S. Givigi, H. Schwartz\",\"doi\":\"10.1109/MED.2011.5983135\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We represent the multiple pursuers and evaders game as a Markov game and each player as a decentralized unit that has to work independently in order to complete a task. Most proposed solutions for this distributed multiagent decision problem require some sort of central coordination. In this paper, we intend to model each player as a learning automata (LA) and let them evolve and adapt in order to solve the difficult problem they have at hand. We are also going to show that using the proposed learning process, the players' policies will converge to an equilibrium point. Simulations of such scenarios with multiple pursuers and evaders are presented in order to show the feasibility of the approach.\",\"PeriodicalId\":146203,\"journal\":{\"name\":\"2011 19th Mediterranean Conference on Control & Automation (MED)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 19th Mediterranean Conference on Control & Automation (MED)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MED.2011.5983135\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 19th Mediterranean Conference on Control & Automation (MED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MED.2011.5983135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Decentralized learning in multiple pursuer-evader Markov games
We represent the multiple-pursuer, multiple-evader game as a Markov game in which each player is a decentralized unit that must act independently to complete its task. Most proposed solutions to this distributed multi-agent decision problem require some form of central coordination. In this paper, we model each player as a learning automaton (LA) and let the players evolve and adapt to solve the difficult problem at hand. We also show that, under the proposed learning process, the players' policies converge to an equilibrium point. Simulations of scenarios with multiple pursuers and evaders are presented to demonstrate the feasibility of the approach.
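The abstract does not specify which learning-automaton update the players use. As an illustration only, the sketch below shows a classical linear reward-inaction (L_R-I) automaton, the kind of decentralized, payoff-driven rule such players could apply independently at each state; the class name, step size `lr`, and the assumption of a reward normalized to [0, 1] are mine, not the authors' implementation.

```python
import random

class LearningAutomaton:
    """Minimal linear reward-inaction (L_R-I) automaton (illustrative sketch)."""

    def __init__(self, n_actions, lr=0.1):
        self.n_actions = n_actions
        self.lr = lr                              # reward step size (assumed value)
        self.p = [1.0 / n_actions] * n_actions    # action-probability vector

    def choose_action(self):
        # Sample an action according to the current probability vector.
        return random.choices(range(self.n_actions), weights=self.p)[0]

    def update(self, action, reward):
        # Reward-inaction rule: shift probability mass toward the chosen
        # action in proportion to the received reward (assumed in [0, 1]);
        # a zero reward leaves the probabilities unchanged.
        for a in range(self.n_actions):
            if a == action:
                self.p[a] += self.lr * reward * (1.0 - self.p[a])
            else:
                self.p[a] -= self.lr * reward * self.p[a]

# Hypothetical usage: each pursuer or evader would keep one automaton per
# game state and update it with the payoff observed after each joint move.
la = LearningAutomaton(n_actions=4)
a = la.choose_action()
la.update(a, reward=1.0)
print(la.p)
```

Because each automaton updates only from its own action and payoff, no central coordinator is needed, which matches the decentralized setting the abstract describes.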