Reinforcement Learning with an Extended Classifier System in Zero-sum Markov Games

Chang Wang, Hao Chen, Chao Yan, Xiaojia Xiang
{"title":"零和马尔可夫博弈中扩展分类器系统的强化学习","authors":"Chang Wang, Hao Chen, Chao Yan, Xiaojia Xiang","doi":"10.1109/AGENTS.2019.8929148","DOIUrl":null,"url":null,"abstract":"A reinforcement learning (RL) agent can learn how to win against an opponent agent in zero-sum Markov Games after episodes of training. However, it is still challenging for the RL agent to acquire the optimal policy if the opponent agent is also able to learn concurrently. In this paper, we propose a new RL algorithm based on the eXtended Classifier System (XCS) that maintains a population of competing rules for action selection and uses the genetic algorithm (GA) to evolve the rules for searching the optimal policy. The RL agent can learn from scratch by observing the behaviors of the opponent agent without making any assumptions about the policy of the RL agent or the opponent agent. In addition, we use eligibility trace to further speed up the learning process. We demonstrate the performance of the proposed algorithm by comparing it with several benchmark algorithms in an adversarial soccer game against the same deterministic policy learner.","PeriodicalId":235878,"journal":{"name":"2019 IEEE International Conference on Agents (ICA)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Reinforcement Learning with an Extended Classifier System in Zero-sum Markov Games\",\"authors\":\"Chang Wang, Hao Chen, Chao Yan, Xiaojia Xiang\",\"doi\":\"10.1109/AGENTS.2019.8929148\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A reinforcement learning (RL) agent can learn how to win against an opponent agent in zero-sum Markov Games after episodes of training. However, it is still challenging for the RL agent to acquire the optimal policy if the opponent agent is also able to learn concurrently. In this paper, we propose a new RL algorithm based on the eXtended Classifier System (XCS) that maintains a population of competing rules for action selection and uses the genetic algorithm (GA) to evolve the rules for searching the optimal policy. The RL agent can learn from scratch by observing the behaviors of the opponent agent without making any assumptions about the policy of the RL agent or the opponent agent. In addition, we use eligibility trace to further speed up the learning process. 
We demonstrate the performance of the proposed algorithm by comparing it with several benchmark algorithms in an adversarial soccer game against the same deterministic policy learner.\",\"PeriodicalId\":235878,\"journal\":{\"name\":\"2019 IEEE International Conference on Agents (ICA)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Agents (ICA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AGENTS.2019.8929148\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Agents (ICA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AGENTS.2019.8929148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

A reinforcement learning (RL) agent can learn how to win against an opponent agent in zero-sum Markov Games after episodes of training. However, it is still challenging for the RL agent to acquire the optimal policy if the opponent agent is also able to learn concurrently. In this paper, we propose a new RL algorithm based on the eXtended Classifier System (XCS) that maintains a population of competing rules for action selection and uses the genetic algorithm (GA) to evolve the rules for searching the optimal policy. The RL agent can learn from scratch by observing the behaviors of the opponent agent without making any assumptions about the policy of the RL agent or the opponent agent. In addition, we use eligibility trace to further speed up the learning process. We demonstrate the performance of the proposed algorithm by comparing it with several benchmark algorithms in an adversarial soccer game against the same deterministic policy learner.
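
The abstract names three mechanisms: an XCS-style population of condition-action rules, a steady-state genetic algorithm that evolves those rules, and eligibility traces for faster credit assignment. The sketch below illustrates how these pieces can fit together; it is a minimal, hypothetical reconstruction rather than the paper's implementation. The bit-string state encoding, the four-action set, all parameter values, and the names Rule, XCSAgent, act, learn, and evolve are assumptions made purely for illustration.

```python
import random
from dataclasses import dataclass

STATE_BITS = 6          # assumed length of a binary state encoding
ACTIONS = [0, 1, 2, 3]  # e.g. four movement directions in a grid soccer game

@dataclass
class Rule:
    condition: str            # string over {'0', '1', '#'}; '#' is a wildcard
    action: int
    prediction: float = 10.0  # predicted payoff for firing this rule
    fitness: float = 0.1      # accuracy-based fitness used by the GA
    trace: float = 0.0        # eligibility trace for this rule

    def matches(self, state: str) -> bool:
        return all(c == '#' or c == s for c, s in zip(self.condition, state))

def random_rule() -> Rule:
    cond = ''.join(random.choice('01#') for _ in range(STATE_BITS))
    return Rule(cond, random.choice(ACTIONS))

class XCSAgent:
    def __init__(self, pop_size=100, alpha=0.2, gamma=0.9, lam=0.8, eps=0.1):
        self.pop = [random_rule() for _ in range(pop_size)]
        self.alpha, self.gamma, self.lam, self.eps = alpha, gamma, lam, eps

    def act(self, state: str) -> int:
        match_set = [r for r in self.pop if r.matches(state)]
        if not match_set:  # covering: synthesize a rule that matches this state
            new = Rule(''.join(random.choice((s, '#')) for s in state),
                       random.choice(ACTIONS))
            self.pop[random.randrange(len(self.pop))] = new
            match_set = [new]
        if random.random() < self.eps:
            action = random.choice(ACTIONS)
        else:  # fitness-weighted payoff prediction, per action
            def score(a):
                rs = [r for r in match_set if r.action == a]
                if not rs:
                    return float('-inf')
                w = sum(r.fitness for r in rs)
                return sum(r.prediction * r.fitness for r in rs) / w
            action = max(ACTIONS, key=score)
        for r in match_set:      # mark the action set for credit assignment
            if r.action == action:
                r.trace = 1.0    # replacing traces
        return action

    def learn(self, reward: float, next_state: str):
        # bootstrap from the best prediction among rules matching the next state
        nxt = [r.prediction for r in self.pop if r.matches(next_state)]
        target = reward + self.gamma * (max(nxt) if nxt else 0.0)
        for r in self.pop:
            if r.trace > 1e-4:
                err = target - r.prediction
                r.prediction += self.alpha * r.trace * err
                r.fitness += self.alpha * (1.0 / (1.0 + abs(err)) - r.fitness)
                r.trace *= self.gamma * self.lam  # decay the eligibility trace

    def evolve(self):
        # steady-state GA: crossover and mutation on two fitness-selected parents
        a, b = random.choices(self.pop, weights=[r.fitness for r in self.pop], k=2)
        cut = random.randrange(1, STATE_BITS)
        cond = a.condition[:cut] + b.condition[cut:]
        cond = ''.join(c if random.random() > 0.02 else random.choice('01#')
                       for c in cond)
        child = Rule(cond, random.choice((a.action, b.action)))
        weakest = min(range(len(self.pop)), key=lambda i: self.pop[i].fitness)
        self.pop[weakest] = child  # replace the weakest rule, keeping size fixed
```

In a training loop against the opponent, one would call act(state), step the game, then call learn(reward, next_state), and invoke evolve() periodically so the GA can discover better rule conditions. The eligibility traces let a single reward update every rule that fired recently, which is the speed-up the abstract attributes to traces.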