Multi-Objective Exploration for Proximal Policy Optimization

Nguyen Do Hoang Khoi, Cuong Pham Van, Hoang Vu Tran, C. Truong
{"title":"近端策略优化的多目标探索","authors":"Nguyen Do Hoang Khoi, Cuong Pham Van, Hoang Vu Tran, C. Truong","doi":"10.1109/ATiGB50996.2021.9423319","DOIUrl":null,"url":null,"abstract":"In Reinforcement Learning, the reward is one of the main components to optimize the strategy. While other approaches are based on a simple scalar reward to get an optimal policy, we propose a model learning the designated reward in numerous conditions. Our method, which we call multi-objective exploration for proximal policy optimization (MOE-PPO), alleviates the dependence on the reward design by executing the Preferent Surrogate Objective (PSO). We also make full use of Curiosity Driven Exploration to increase exploration ability. Our experiments test MOE-PPO in the Super Mario Bros environment designed by OpenAIGym with three criteria to illustrate our approach's effectiveness. The result shows that MOE-PPO outperforms other on-policy algorithms under many conditions.","PeriodicalId":6690,"journal":{"name":"2020 Applying New Technology in Green Buildings (ATiGB)","volume":"14 1","pages":"105-109"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Multi-Objective Exploration for Proximal Policy Optimization\",\"authors\":\"Nguyen Do Hoang Khoi, Cuong Pham Van, Hoang Vu Tran, C. Truong\",\"doi\":\"10.1109/ATiGB50996.2021.9423319\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In Reinforcement Learning, the reward is one of the main components to optimize the strategy. While other approaches are based on a simple scalar reward to get an optimal policy, we propose a model learning the designated reward in numerous conditions. Our method, which we call multi-objective exploration for proximal policy optimization (MOE-PPO), alleviates the dependence on the reward design by executing the Preferent Surrogate Objective (PSO). We also make full use of Curiosity Driven Exploration to increase exploration ability. Our experiments test MOE-PPO in the Super Mario Bros environment designed by OpenAIGym with three criteria to illustrate our approach's effectiveness. The result shows that MOE-PPO outperforms other on-policy algorithms under many conditions.\",\"PeriodicalId\":6690,\"journal\":{\"name\":\"2020 Applying New Technology in Green Buildings (ATiGB)\",\"volume\":\"14 1\",\"pages\":\"105-109\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 Applying New Technology in Green Buildings (ATiGB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ATiGB50996.2021.9423319\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Applying New Technology in Green Buildings (ATiGB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ATiGB50996.2021.9423319","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In reinforcement learning, the reward is one of the main components used to optimize the policy. While other approaches rely on a simple scalar reward to obtain an optimal policy, we propose a model that learns the designated reward under numerous conditions. Our method, which we call multi-objective exploration for proximal policy optimization (MOE-PPO), alleviates the dependence on reward design by executing the Preferent Surrogate Objective (PSO). We also make full use of curiosity-driven exploration to increase exploration ability. Our experiments test MOE-PPO in the Super Mario Bros environment provided by OpenAI Gym, using three criteria to illustrate the effectiveness of our approach. The results show that MOE-PPO outperforms other on-policy algorithms under many conditions.
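
The abstract describes the method only at a high level. As a point of reference, below is a minimal sketch, not the authors' code, of the two standard ingredients MOE-PPO builds on: the PPO clipped surrogate objective and a curiosity-style intrinsic reward bonus. The paper's Preferent Surrogate Objective (PSO) and its multi-objective weighting are not specified in the abstract, so the mixing coefficient eta below is a hypothetical placeholder.

```python
# Minimal sketch (not the authors' implementation) of the standard PPO clipped
# surrogate loss plus a hypothetical curiosity-bonus reward mix.
import torch


def ppo_clipped_surrogate(new_log_probs: torch.Tensor,
                          old_log_probs: torch.Tensor,
                          advantages: torch.Tensor,
                          clip_eps: float = 0.2) -> torch.Tensor:
    """Standard PPO clipped surrogate loss, returned as a quantity to minimize."""
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s)
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))


def mixed_reward(extrinsic: torch.Tensor,
                 intrinsic: torch.Tensor,
                 eta: float = 0.1) -> torch.Tensor:
    """Hypothetical mix of the environment reward with a curiosity bonus;
    eta is a placeholder, not a value taken from the paper."""
    return extrinsic + eta * intrinsic
```

In an actual training loop, the advantages would be estimated from returns built on the mixed (extrinsic plus intrinsic) reward, and the surrogate loss would be minimized per minibatch; how the paper trades off its multiple objectives is left to the full text.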