A role-based POMDPs approach for decentralized implicit cooperation of multiple agents

Authors: H. Zhang, Jie Chen, H. Fang, L. Dou
DOI: 10.1109/ICCA.2017.8003110
Venue: 2017 13th IEEE International Conference on Control & Automation (ICCA)
Published: July 2017
Citations: 4
Decentralized decision making under uncertainty is one of the fundamental challenges in multi-agent systems. Current approaches to multi-agent coordination rely on continuous communication among team members and therefore cannot be applied in practical settings where loss of communication frequently occurs. To address this problem, a role-based multi-agent model for implicit coordination is presented. The model uses the concept of a role to decompose a mission into a set of single-agent partially observable Markov decision processes (POMDPs) and an optimal task assignment. Each role-based model, defined by the responsibilities and rights of its role, can be solved with acceptable computational complexity. Predicting teammates' actions is the key issue in implicit coordination; to this end, an action prediction algorithm based on the role-based model is proposed, which estimates the current belief state by Bayesian estimation and predicts future actions from the role-based policy. After a clue to the actual action is obtained through observation, the deviation of the prediction is corrected by filtering the prediction set with that clue. Experimental results demonstrate the validity of the proposed approach for coordination without communication.
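The belief-state estimation the abstract refers to is, in the standard POMDP setting, a discrete Bayes filter: the prior belief is propagated through the transition model and reweighted by the observation likelihood. The sketch below illustrates that generic update only; the transition/observation matrices and the two-state numbers are hypothetical and are not taken from the paper.

```python
import numpy as np

def belief_update(belief, T, O, action, obs):
    """One step of a discrete Bayes filter over POMDP belief states.

    belief : (S,)       prior probability over states
    T      : (A, S, S)  transition probs, T[a, s, s'] = P(s' | s, a)
    O      : (A, S, Ob) observation probs, O[a, s', o] = P(o | s', a)
    """
    predicted = belief @ T[action]           # sum_s P(s'|s,a) b(s)
    unnorm = predicted * O[action][:, obs]   # weight by P(o | s', a)
    return unnorm / unnorm.sum()             # normalize to a distribution

# Toy two-state example (all numbers illustrative):
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])                 # one action
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])                 # two possible observations
b = np.array([0.5, 0.5])
b_new = belief_update(b, T, O, action=0, obs=0)
```

In the paper's scheme, each agent would maintain such a belief for its teammates' role-based models and feed it to the role-based policy to predict their next actions.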