Modified Annealed Adversarial Bonus for Adversarially Guided Actor-Critic

Qian Zhao, Fanyu Zeng, Mao Xu, Jinhui Han
DOI: 10.1109/YAC57282.2022.10023796
Published in: 2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC)
Publication date: 2022-11-19

Abstract

This paper investigates learning efficiency for reinforcement learning in procedurally generated environments. A more sophisticated method is proposed to adjust the adversarial bonus and promote learning efficiency, replacing the linearly decayed scheme used in adversarially guided actor-critic. Our method ties the bonus adjustment to the learning procedure: in some environments, an agent that learns better reaches the goal in fewer steps, so when the episode length decreases, our method reduces the adversarial bonus accordingly. In this way, learning efficiency is improved on some procedurally generated tasks. Several experiments are implemented in MiniGrid to verify the proposed method. In these experiments, the proposed method outperforms existing adversarially guided methods on several challenging procedurally generated tasks.
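The abstract describes the core rule: scale the adversarial bonus down as episodes get shorter, rather than decaying it linearly over training. A minimal sketch of that idea is below; the function and parameter names (`adjust_bonus`, `base_bonus`, `max_episode_length`) are illustrative assumptions, not taken from the paper.

```python
def adjust_bonus(base_bonus: float, episode_length: int,
                 max_episode_length: int) -> float:
    """Hypothetical episode-length-based annealing of the adversarial bonus.

    Shorter episodes suggest the agent is reaching the goal in fewer
    steps (i.e., learning is progressing), so the adversarial bonus
    coefficient is reduced in proportion to episode length.
    """
    ratio = episode_length / max_episode_length  # in (0, 1]
    return base_bonus * ratio


# Example: an episode that ends in 50 of 100 possible steps
# halves the bonus relative to its base value.
print(adjust_bonus(1.0, 50, 100))
```

A linear-decay baseline, by contrast, would shrink the bonus as a fixed function of the training step regardless of how the agent is actually performing.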