EASpace: Enhanced Action Space for Policy Transfer.

Impact Factor 10.2 · CAS Tier 1 (Computer Science) · JCR Q1 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zheng Zhang, Qingrui Zhang, Bo Zhu, Xiaohan Wang, Tianjiang Hu
{"title":"EASpace: Enhanced Action Space for Policy Transfer.","authors":"Zheng Zhang, Qingrui Zhang, Bo Zhu, Xiaohan Wang, Tianjiang Hu","doi":"10.1109/TNNLS.2023.3322591","DOIUrl":null,"url":null,"abstract":"<p><p>Formulating expert policies as macro actions promises to alleviate the long-horizon issue via structured exploration and efficient credit assignment. However, traditional option-based multipolicy transfer methods suffer from inefficient exploration of macro action's length and insufficient exploitation of useful long-duration macro actions. In this article, a novel algorithm named enhanced action space (EASpace) is proposed, which formulates macro actions in an alternative form to accelerate the learning process using multiple available suboptimal expert policies. Specifically, EASpace formulates each expert policy into multiple macro actions with different execution times. All the macro actions are then integrated into the primitive action space directly. An intrinsic reward, which is proportional to the execution time of macro actions, is introduced to encourage the exploitation of useful macro actions. The corresponding learning rule that is similar to intraoption Q-learning is employed to improve the data efficiency. Theoretical analysis is presented to show the convergence of the proposed learning rule. The efficiency of EASpace is illustrated by a grid-based game and a multiagent pursuit problem. The proposed algorithm is also implemented in physical systems to validate its effectiveness.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2023.3322591","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Formulating expert policies as macro actions promises to alleviate the long-horizon issue via structured exploration and efficient credit assignment. However, traditional option-based multi-policy transfer methods suffer from inefficient exploration of macro-action lengths and insufficient exploitation of useful long-duration macro actions. In this article, a novel algorithm named enhanced action space (EASpace) is proposed, which formulates macro actions in an alternative form to accelerate the learning process using multiple available suboptimal expert policies. Specifically, EASpace formulates each expert policy into multiple macro actions with different execution times. All the macro actions are then integrated into the primitive action space directly. An intrinsic reward, which is proportional to the execution time of macro actions, is introduced to encourage the exploitation of useful macro actions. A corresponding learning rule similar to intra-option Q-learning is employed to improve data efficiency. Theoretical analysis is presented to show the convergence of the proposed learning rule. The efficiency of EASpace is illustrated by a grid-based game and a multiagent pursuit problem. The proposed algorithm is also implemented on physical systems to validate its effectiveness.
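To make the mechanism concrete, below is a minimal, hypothetical Python sketch of the idea described above, not the authors' implementation: each expert policy is expanded into macro actions with several fixed execution times, the macros are appended to the primitive action space, and an intrinsic bonus proportional to the macro's execution time is added to its return. The toy chain environment, the duration set DURATIONS, and the coefficient ETA are all illustrative assumptions; the paper's actual learning rule is an intra-option-style variant that also updates macro actions consistent with each primitive transition, which this sketch simplifies to plain SMDP-style Q-learning.

import random
from collections import defaultdict

N_PRIMITIVE = 2                      # primitive actions: 0 = left, 1 = right
DURATIONS = [2, 4, 8]                # assumed candidate execution times per expert
ETA = 0.01                           # assumed intrinsic-reward coefficient
GAMMA, ALPHA, EPS = 0.99, 0.1, 0.1

# Two stand-in suboptimal experts on a 1-D chain: always-right and always-left.
EXPERTS = [lambda s: 1, lambda s: 0]

def env_step(s, a):
    """Toy chain MDP on states 0..10: reward 1.0 at the goal (10), -0.01 per step."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), 10)
    return s2, (1.0 if s2 == 10 else -0.01), s2 == 10

# Enhanced action space: primitives plus one macro action per (expert, duration) pair.
MACROS = [(i, tau) for i in range(len(EXPERTS)) for tau in DURATIONS]
ACTIONS = list(range(N_PRIMITIVE)) + MACROS
Q = defaultdict(float)               # tabular Q over (state, enhanced action)

def execute(s, action):
    """Apply one enhanced action; return (s', discounted return, elapsed steps, done)."""
    if isinstance(action, int):                     # primitive action: one env step
        s2, r, done = env_step(s, action)
        return s2, r, 1, done
    i, tau = action                                 # macro: follow expert i for tau steps
    total, done, t = 0.0, False, 0
    for t in range(tau):
        s, r, done = env_step(s, EXPERTS[i](s))
        total += GAMMA ** t * r
        if done:
            break
    return s, total + ETA * (t + 1), t + 1, done    # intrinsic bonus ∝ execution time

for _ in range(500):                                # Q-learning over the enhanced space
    s, done, steps = 0, False, 0
    while not done and steps < 400:
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, k, done = execute(s, a)
        target = r + (0.0 if done else GAMMA ** k * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s, steps = s2, steps + k

After training on this toy chain, the greedy policy at early states should tend to select the long right-moving macros, illustrating how the duration-proportional bonus biases selection toward useful long-duration macro actions.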

Source Journal
IEEE Transactions on Neural Networks and Learning Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
CiteScore: 23.80
Self-citation rate: 9.60%
Annual publications: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.