{"title":"多智能体觅食——基于搜索的q学习的进一步发展","authors":"S. Hayat, M. Niazi","doi":"10.1109/ICET.2005.1558883","DOIUrl":null,"url":null,"abstract":"The paper discusses a foraging model which accomplishes coordination obliged tasks. This is done through communication techniques and by learning from and about other agents in a confined, previously unseen environment. A new reinforcement learning technique, Q-Learning with search has been proposed. It is shown to boost the convergence of optimal paths learnt by the agents as compared to traditional QLearning. Different foraging tasks are solved requiring varying degree of collective and individual efforts using the new proposed mechanism. The model enables us to characterize the ability of agents to solve complex foraging tasks rapidly and effectively.","PeriodicalId":222828,"journal":{"name":"Proceedings of the IEEE Symposium on Emerging Technologies, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Multi agent foraging - taking a step further Q-leaming with search\",\"authors\":\"S. Hayat, M. Niazi\",\"doi\":\"10.1109/ICET.2005.1558883\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper discusses a foraging model which accomplishes coordination obliged tasks. This is done through communication techniques and by learning from and about other agents in a confined, previously unseen environment. A new reinforcement learning technique, Q-Learning with search has been proposed. It is shown to boost the convergence of optimal paths learnt by the agents as compared to traditional QLearning. Different foraging tasks are solved requiring varying degree of collective and individual efforts using the new proposed mechanism. The model enables us to characterize the ability of agents to solve complex foraging tasks rapidly and effectively.\",\"PeriodicalId\":222828,\"journal\":{\"name\":\"Proceedings of the IEEE Symposium on Emerging Technologies, 2005.\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the IEEE Symposium on Emerging Technologies, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICET.2005.1558883\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the IEEE Symposium on Emerging Technologies, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICET.2005.1558883","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi agent foraging - taking a step further Q-learning with search
The paper discusses a foraging model that accomplishes tasks requiring coordination. This is achieved through communication techniques and by learning from and about other agents in a confined, previously unseen environment. A new reinforcement learning technique, Q-Learning with search, is proposed. It is shown to speed up the convergence of the optimal paths learnt by the agents compared to traditional Q-Learning. Different foraging tasks, requiring varying degrees of collective and individual effort, are solved using the newly proposed mechanism. The model enables us to characterize the ability of agents to solve complex foraging tasks rapidly and effectively.
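The abstract names the technique only at a high level, so the Python sketch below is a minimal illustration, assuming a single-agent grid-world foraging task: q_update implements the standard one-step tabular Q-learning rule, while seed_q_table is a hypothetical search-based initialization (a breadth-first search toward the food cell) standing in for the paper's "search" component. The ACTIONS set, the grid layout, and the seeding scheme are illustrative assumptions, not the authors' actual mechanism.

import random
from collections import defaultdict, deque

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up


def bfs_distances(goal, free_cells):
    # Breadth-first search distances to the goal over free grid cells.
    # Hypothetical stand-in for the paper's "search" component.
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        cell = frontier.popleft()
        for dx, dy in ACTIONS:
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt in free_cells and nxt not in dist:
                dist[nxt] = dist[cell] + 1
                frontier.append(nxt)
    return dist


def seed_q_table(q, free_cells, goal):
    # Bias the Q-table toward actions that move closer to the goal (assumed scheme).
    dist = bfs_distances(goal, free_cells)
    for s in free_cells:
        for a_idx, (dx, dy) in enumerate(ACTIONS):
            nxt = (s[0] + dx, s[1] + dy)
            if nxt in dist and dist[nxt] < dist.get(s, float("inf")):
                q[(s, a_idx)] = 1.0 / (1 + dist[nxt])


def q_update(q, s, a, r, s_next):
    # Standard one-step Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q[(s_next, a2)] for a2 in range(len(ACTIONS)))
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])


def epsilon_greedy(q, s):
    # Epsilon-greedy action selection over the tabular Q-function.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q[(s, a)])


if __name__ == "__main__":
    free = {(x, y) for x in range(5) for y in range(5)}  # toy 5x5 grid, no obstacles
    goal = (4, 4)                                        # food location (illustrative)
    q = defaultdict(float)
    seed_q_table(q, free, goal)
    a = epsilon_greedy(q, (0, 0))
    q_update(q, (0, 0), a, 0.0, (0, 1))                  # one simulated transition

The intent of seeding from a search pass is to give the agent a non-zero estimate along promising routes before any reward is observed, which is one plausible way search could accelerate convergence relative to plain Q-learning; the paper itself should be consulted for the actual multi-agent coordination and communication details.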