{"title":"基于组合决策树的异构分层合作学习","authors":"M. Asadpour, M. N. Ahmadabadi, R. Siegwart","doi":"10.1109/IROS.2006.281990","DOIUrl":null,"url":null,"abstract":"Decision trees, being human readable and hierarchically structured, provide a suitable mean to derive state-space abstraction and simplify the inclusion of the available knowledge for a reinforcement learning (RL) agent. In this paper, we address two approaches to combine and purify the available knowledge in the abstraction trees, stored among different RL agents in a multi-agent system, or among the decision trees learned by the same agent using different methods. Simulation results in nondeterministic football learning task provide strong evidences for enhancement in convergence rate and policy performance","PeriodicalId":237562,"journal":{"name":"2006 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Heterogeneous and Hierarchical Cooperative Learning via Combining Decision Trees\",\"authors\":\"M. Asadpour, M. N. Ahmadabadi, R. Siegwart\",\"doi\":\"10.1109/IROS.2006.281990\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decision trees, being human readable and hierarchically structured, provide a suitable mean to derive state-space abstraction and simplify the inclusion of the available knowledge for a reinforcement learning (RL) agent. In this paper, we address two approaches to combine and purify the available knowledge in the abstraction trees, stored among different RL agents in a multi-agent system, or among the decision trees learned by the same agent using different methods. Simulation results in nondeterministic football learning task provide strong evidences for enhancement in convergence rate and policy performance\",\"PeriodicalId\":237562,\"journal\":{\"name\":\"2006 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2006 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.2006.281990\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2006.281990","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Heterogeneous and Hierarchical Cooperative Learning via Combining Decision Trees
Decision trees, being human readable and hierarchically structured, provide a suitable means to derive state-space abstractions and simplify the inclusion of available knowledge for a reinforcement learning (RL) agent. In this paper, we present two approaches to combine and purify the knowledge contained in abstraction trees, whether stored among different RL agents in a multi-agent system or among decision trees learned by the same agent using different methods. Simulation results on a nondeterministic football-learning task provide strong evidence of improvements in convergence rate and policy performance.
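To make the idea of combining tree-based abstractions concrete, the sketch below shows one minimal way two agents' leaf-level knowledge could be merged: each tree is treated as a partition of the state space, the partitions are refined by pairing leaf ids, and Q-value estimates are averaged with weights proportional to each leaf's visit count. The names (`combine`, `LeafStats`), the visit-count weighting, and the toy football-style features are illustrative assumptions, not the combination or purification method proposed in the paper.

```python
# Illustrative sketch only: merging leaf-level knowledge from two
# decision-tree state abstractions. The refinement-and-weighting scheme
# here is an assumption for illustration, not the paper's algorithm.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple, Iterable, List

State = Tuple[float, float]           # e.g. (ball_distance, goal_angle)
Abstraction = Callable[[State], int]  # maps a raw state to a leaf/cluster id


@dataclass
class LeafStats:
    q_values: Dict[str, float]  # action -> Q estimate in this abstract state
    visits: int                 # amount of experience backing the estimate


def combine(abs_a: Abstraction, stats_a: Dict[int, LeafStats],
            abs_b: Abstraction, stats_b: Dict[int, LeafStats],
            states: Iterable[State], actions: List[str]):
    """Refine the two abstractions (pair of leaf ids) and merge Q-values,
    weighting each source tree by the experience behind its leaf."""
    combined: Dict[Tuple[int, int], Dict[str, float]] = {}
    for s in states:
        key = (abs_a(s), abs_b(s))  # refined abstract state
        if key in combined:
            continue
        la, lb = stats_a[key[0]], stats_b[key[1]]
        w = (la.visits + lb.visits) or 1
        combined[key] = {
            a: (la.visits * la.q_values.get(a, 0.0) +
                lb.visits * lb.q_values.get(a, 0.0)) / w
            for a in actions
        }
    return combined


if __name__ == "__main__":
    # Two toy trees: one splits on distance to the ball, the other on angle.
    abs_a = lambda s: 0 if s[0] < 5.0 else 1
    abs_b = lambda s: 0 if s[1] < 0.0 else 1
    actions = ["shoot", "pass"]
    stats_a = {0: LeafStats({"shoot": 0.8, "pass": 0.2}, visits=40),
               1: LeafStats({"shoot": 0.1, "pass": 0.6}, visits=10)}
    stats_b = {0: LeafStats({"shoot": 0.5, "pass": 0.4}, visits=20),
               1: LeafStats({"shoot": 0.3, "pass": 0.7}, visits=30)}
    sample_states = [(2.0, -0.5), (2.0, 0.5), (8.0, -0.5), (8.0, 0.5)]
    print(combine(abs_a, stats_a, abs_b, stats_b, sample_states, actions))
```

In this toy setup the merged table could then seed a learner's Q-values over the refined abstract states, which is one plausible reading of how combined trees might accelerate convergence; the paper itself should be consulted for the actual combination and purification procedures.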