{"title":"CHQ:部分可观察马尔可夫决策过程的多智能体强化学习方案","authors":"Hiroshi Osada, S. Fujita","doi":"10.1093/ietisy/e88-d.5.1004","DOIUrl":null,"url":null,"abstract":"We propose a reinforcement learning scheme called CHQ that could efficiently acquire appropriate policies under partially observable Markov decision processes (POMDP) involving probabilistic state transitions, that frequently occurs in multiagent systems in which each agent independently takes a probabilistic action based on a partial observation of the underlying environment. A key idea of CHQ is to extend the HQ-learning proposed by Wiering et al. in such a way that it could learn the activation order of the MDP subtasks as well as an appropriate policy under each MDP subtask. The quality of the proposed scheme is experimentally evaluated. The result of experiments implies that it can acquire a deterministic policy with sufficiently high success rate, even if the given task is POMDP with probabilistic state transitions.","PeriodicalId":281008,"journal":{"name":"Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. (IAT 2004).","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"CHQ: a multi-agent reinforcement learning scheme for partially observable Markov decision processes\",\"authors\":\"Hiroshi Osada, S. Fujita\",\"doi\":\"10.1093/ietisy/e88-d.5.1004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a reinforcement learning scheme called CHQ that could efficiently acquire appropriate policies under partially observable Markov decision processes (POMDP) involving probabilistic state transitions, that frequently occurs in multiagent systems in which each agent independently takes a probabilistic action based on a partial observation of the underlying environment. A key idea of CHQ is to extend the HQ-learning proposed by Wiering et al. in such a way that it could learn the activation order of the MDP subtasks as well as an appropriate policy under each MDP subtask. The quality of the proposed scheme is experimentally evaluated. The result of experiments implies that it can acquire a deterministic policy with sufficiently high success rate, even if the given task is POMDP with probabilistic state transitions.\",\"PeriodicalId\":281008,\"journal\":{\"name\":\"Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. (IAT 2004).\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-10-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. (IAT 2004).\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/ietisy/e88-d.5.1004\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. 
(IAT 2004).","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/ietisy/e88-d.5.1004","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CHQ: a multi-agent reinforcement learning scheme for partially observable Markov decision processes
We propose a reinforcement learning scheme called CHQ that can efficiently acquire appropriate policies under partially observable Markov decision processes (POMDPs) involving probabilistic state transitions, which frequently arise in multi-agent systems where each agent independently takes a probabilistic action based on a partial observation of the underlying environment. The key idea of CHQ is to extend the HQ-learning scheme proposed by Wiering et al. so that it can learn the activation order of the MDP subtasks as well as an appropriate policy for each subtask. The quality of the proposed scheme is evaluated experimentally. The results imply that CHQ can acquire a deterministic policy with a sufficiently high success rate, even when the given task is a POMDP with probabilistic state transitions.
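To make the two-level learning idea concrete, the following is a minimal, hypothetical sketch of HQ-learning-style hierarchical Q-learning in the spirit of the abstract: a high-level table that ranks subgoals (and thereby the activation order of the subtasks) and a low-level Q-table per subtask that learns a reactive policy over observations. The class name, parameters (n_subtasks, alpha, gamma, epsilon), and environment interface are assumptions for illustration, not the authors' implementation or the multi-agent machinery of CHQ.

```python
# Hypothetical sketch of HQ-learning-style hierarchical Q-learning.
# Not the authors' CHQ implementation; single-agent, tabular, for illustration only.
import random
from collections import defaultdict


class HQAgent:
    def __init__(self, actions, observations, n_subtasks=3,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions
        self.observations = observations
        self.n_subtasks = n_subtasks
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Low-level Q-tables: one reactive policy per subtask over (observation, action) pairs.
        self.Q = [defaultdict(float) for _ in range(n_subtasks)]
        # High-level HQ-tables: value of choosing an observation as the subgoal that
        # terminates each subtask; learning these fixes the subtask activation order.
        self.HQ = [defaultdict(float) for _ in range(n_subtasks)]

    def choose_subgoal(self, subtask):
        # Epsilon-greedy selection of the subgoal observation for this subtask.
        if random.random() < self.epsilon:
            return random.choice(self.observations)
        return max(self.observations, key=lambda o: self.HQ[subtask][o])

    def choose_action(self, subtask, obs):
        # Epsilon-greedy action selection under the active subtask's policy.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[subtask][(obs, a)])

    def update_q(self, subtask, obs, action, reward, next_obs):
        # Standard one-step Q-learning update for the active subtask's policy.
        best_next = max(self.Q[subtask][(next_obs, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.Q[subtask][(obs, action)]
        self.Q[subtask][(obs, action)] += self.alpha * td

    def update_hq(self, subtask, subgoal, subtask_return):
        # Credit the chosen subgoal with the return collected while the subtask
        # was active, so the sequence of subgoals (activation order) is learned.
        td = subtask_return - self.HQ[subtask][subgoal]
        self.HQ[subtask][subgoal] += self.alpha * td
```

In use, the agent would run subtask 0 with its reactive policy until the chosen subgoal observation is reached, then switch to subtask 1, and so on; the HQ updates shape which subgoals are chosen, while the Q updates shape behavior within each subtask. CHQ as described above additionally handles the multi-agent setting and probabilistic state transitions, which this sketch does not attempt to model.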