{"title":"将选项集成到MAXQ中的多智能体分层强化学习","authors":"Jing Shen, Guochang Gu, Haibo Liu","doi":"10.1109/IMSCCS.2006.90","DOIUrl":null,"url":null,"abstract":"MAXQ is a new framework for multi-agent reinforcement learning. But the MAXQ framework cannot decompose all subtasks into more refined hierarchies and the hierarchies are difficult to be discovered automatically. In this paper, a multi-agent hierarchical reinforcement learning approach, named OptMAXQ, by integrating Options into MAXQ is presented. In the OptMAXQ framework, the MAXQ framework is used to introduce knowledge into reinforcement learning and the option framework is used to construct hierarchies automatically. The performance of OptMAXQ is demonstrated in two-robot trash collection task and compared with MAXQ. The simulation results show that the OptMAXQ is more practical than MAXQ in partial known environment","PeriodicalId":202629,"journal":{"name":"First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Multi-Agent Hierarchical Reinforcement Learning by Integrating Options into MAXQ\",\"authors\":\"Jing Shen, Guochang Gu, Haibo Liu\",\"doi\":\"10.1109/IMSCCS.2006.90\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"MAXQ is a new framework for multi-agent reinforcement learning. But the MAXQ framework cannot decompose all subtasks into more refined hierarchies and the hierarchies are difficult to be discovered automatically. In this paper, a multi-agent hierarchical reinforcement learning approach, named OptMAXQ, by integrating Options into MAXQ is presented. In the OptMAXQ framework, the MAXQ framework is used to introduce knowledge into reinforcement learning and the option framework is used to construct hierarchies automatically. The performance of OptMAXQ is demonstrated in two-robot trash collection task and compared with MAXQ. The simulation results show that the OptMAXQ is more practical than MAXQ in partial known environment\",\"PeriodicalId\":202629,\"journal\":{\"name\":\"First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06)\",\"volume\":\"93 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IMSCCS.2006.90\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IMSCCS.2006.90","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-Agent Hierarchical Reinforcement Learning by Integrating Options into MAXQ
MAXQ is a recently proposed framework for multi-agent reinforcement learning. However, the MAXQ framework cannot decompose every subtask into a more refined hierarchy, and such hierarchies are difficult to discover automatically. In this paper, a multi-agent hierarchical reinforcement learning approach named OptMAXQ, which integrates Options into MAXQ, is presented. In the OptMAXQ framework, MAXQ is used to introduce prior knowledge into reinforcement learning, and the options framework is used to construct hierarchies automatically. The performance of OptMAXQ is demonstrated on a two-robot trash-collection task and compared with MAXQ. The simulation results show that OptMAXQ is more practical than MAXQ in partially known environments.
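For readers unfamiliar with the two ingredients the abstract combines, the sketch below (plain Python, not taken from the paper) shows the standard form of an option, namely an initiation set I, an internal policy pi, and a termination condition beta, together with a MAXQ-style subtask node that can accept such options as children. All class, field, and function names are illustrative assumptions; the paper's own OptMAXQ code is not reproduced in this record.

# Minimal, hypothetical sketch of an option and a MAXQ-style subtask node.
# Names and structure are assumptions for illustration, not the authors' code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

State = Tuple[int, int]   # e.g. a grid cell occupied by a trash-collecting robot
Action = int              # index of a primitive action


@dataclass
class Option:
    """An option o = (I, pi, beta) in the options framework."""
    name: str
    initiation: Callable[[State], bool]    # I: states where o may be invoked
    policy: Callable[[State], Action]      # pi: primitive action chosen inside o
    termination: Callable[[State], float]  # beta: probability that o ends in s


@dataclass
class MaxQSubtask:
    """A node in a MAXQ task graph whose children may be options."""
    name: str
    children: List["MaxQSubtask"] = field(default_factory=list)
    option: Optional[Option] = None        # set when this node wraps an option
    # Completion function C(i, s, a), learned by MAXQ-Q-style updates.
    completion: Dict[Tuple[State, str], float] = field(default_factory=dict)

    def admissible(self, s: State) -> List["MaxQSubtask"]:
        """Children that are plain subtasks or options initiable in state s."""
        return [c for c in self.children
                if c.option is None or c.option.initiation(s)]


# Hypothetical usage: an option that drives a robot toward a fixed trash can at (0, 0),
# attached under the root task of a two-robot trash-collection hierarchy.
goto_trash = Option(
    name="goto_trash1",
    initiation=lambda s: True,                        # may start in any state
    policy=lambda s: 0 if s[0] > 0 else 1,            # crude "move toward (0, 0)" rule
    termination=lambda s: 1.0 if s == (0, 0) else 0.0,
)
root = MaxQSubtask(name="root",
                   children=[MaxQSubtask(name="goto_trash1", option=goto_trash)])

In an integration of this kind, option nodes discovered automatically can sit alongside hand-designed MAXQ subtasks, which is the combination of prior knowledge and automatic hierarchy construction that the abstract attributes to OptMAXQ.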