{"title":"Monte-Carlo Planning for Team Re-Formation Under Uncertainty: Model and Properties","authors":"Jonathan Cohen, A. Mouaddib","doi":"10.1109/ICTAI.2018.00077","DOIUrl":null,"url":null,"abstract":"Teamwork in decentralized systems plays a central role in recent artificial intelligence advances, such as in applications to disaster response. Decentralized partially observable Markov decision processes (Dec-POMDPs) have emerged as the de facto standard mathematical framework to study and optimally plan in sequentially decentralized multiagent systems under uncertainty. In this work, we focus our analysis on team formation and reformation in Decentralized POMDPs with a new model coined Team-POMDPs. We present some interesting structural properties of this model inherited from the field of cooperative game theory. We introduce a Monte Carlo-based planning algorithm to learn locally optimal team-reformation policies that tell our agents how to dynamically rearrange in order to better deal with the evolution of the task at hand. By reforming the team during execution, our experiments show that we are able to achieve higher expected long-term rewards than with stationary teams.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI.2018.00077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Teamwork in decentralized systems plays a central role in recent artificial intelligence advances, such as applications to disaster response. Decentralized partially observable Markov decision processes (Dec-POMDPs) have emerged as the de facto standard mathematical framework for studying and optimally planning in sequential decentralized multiagent systems under uncertainty. In this work, we focus our analysis on team formation and reformation in Dec-POMDPs through a new model that we coin Team-POMDPs. We present some interesting structural properties of this model inherited from the field of cooperative game theory. We introduce a Monte Carlo-based planning algorithm that learns locally optimal team-reformation policies, which tell our agents how to dynamically rearrange themselves in order to better deal with the evolution of the task at hand. Our experiments show that, by reforming the team during execution, we achieve higher expected long-term rewards than with stationary teams.
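To make the general idea of Monte Carlo planning over team configurations concrete, the sketch below scores candidate team arrangements by averaging sampled discounted returns under a user-supplied simulator and greedily picks the best-scoring one. This is a minimal illustration only, not the paper's Team-POMDP algorithm; every name and parameter (TeamStep, rollout, choose_team, the toy simulator, the rollout counts) is a hypothetical assumption introduced here for exposition.

import random
from typing import Callable, Sequence, Tuple

State = Tuple[int, ...]   # abstract environment state (placeholder encoding)
Team = Tuple[int, ...]    # a team configuration, encoded as a tuple of agent ids
TeamStep = Callable[[State, Team], Tuple[State, float]]  # simulator: (state, team) -> (next state, reward)


def rollout(step: TeamStep, state: State, team: Team,
            horizon: int, gamma: float) -> float:
    """Simulate one trajectory under a fixed team and return its discounted reward."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = step(state, team)
        total += discount * reward
        discount *= gamma
    return total


def choose_team(step: TeamStep, state: State, candidate_teams: Sequence[Team],
                n_rollouts: int = 50, horizon: int = 20, gamma: float = 0.95) -> Team:
    """Pick the candidate team with the best average Monte Carlo return."""
    def score(team: Team) -> float:
        return sum(rollout(step, state, team, horizon, gamma)
                   for _ in range(n_rollouts)) / n_rollouts
    return max(candidate_teams, key=score)


if __name__ == "__main__":
    # Toy stochastic simulator (purely illustrative): larger teams earn more on average, with noise.
    def toy_step(state: State, team: Team) -> Tuple[State, float]:
        return state, len(team) + random.gauss(0.0, 1.0)

    best = choose_team(toy_step, state=(0,), candidate_teams=[(1,), (1, 2), (1, 2, 3)])
    print("selected team:", best)

In this toy setting the comparison reduces to estimating each configuration's expected return by sampling; the paper's contribution, by contrast, is the Team-POMDP model itself and a Monte Carlo planner that learns when and how to re-form the team during execution rather than choosing a single static configuration up front.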