Reinforcement Learning With Adaptive Policy Gradient Transfer Across Heterogeneous Problems
Authors: Gengzhi Zhang; Liang Feng; Yu Wang; Min Li; Hong Xie; Kay Chen Tan
DOI: 10.1109/TETCI.2024.3361860
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence (Impact Factor 5.3; JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2024-02-26
URL: https://ieeexplore.ieee.org/document/10444921/
Citations: 0
Abstract
To date, transfer learning (TL) has been successfully applied to enhance the learning performance of reinforcement learning (RL), and many transfer RL (TRL) approaches have been proposed in the literature. However, most existing TRL approaches consider knowledge transfer between RL tasks sharing the same state-action space. These methods may therefore fail when the RL tasks available for knowledge transfer possess heterogeneous state-action spaces, which is common in many real-world applications. TRL across heterogeneous problem domains is challenging, since the differences in the state-action spaces of the RL tasks form natural barriers to knowledge transfer across tasks. The problem becomes harder still when multiple heterogeneous source tasks are available for a given target RL task, as the appropriate source task must be identified adaptively before knowledge transfer can improve RL performance. In this article, we propose a new TRL algorithm with adaptive policy gradient transfer for cases with multiple heterogeneous source RL tasks. The core ingredients of the proposed algorithm are a *source task selection module*, which selects an appropriate task from a set of heterogeneous source tasks, and a *knowledge transfer module*, which conducts knowledge transfer across heterogeneous RL tasks. To investigate the performance of the proposed algorithm, we conducted comprehensive empirical studies on the well-known continuous robotic RL task, with heterogeneous settings in the number of robot arm links. The results show that the proposed algorithm is effective and efficient at transferring knowledge across heterogeneous problems for enhanced RL performance, outperforming both an RL algorithm without knowledge transfer and an existing state-of-the-art TRL method.
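The abstract names two components, a source task selection module and a knowledge transfer module, but does not spell out their internals. Purely as an illustrative sketch (not the authors' actual method), the following toy code shows one plausible shape for the idea: each heterogeneous source task's policy gradient is mapped into the target parameter space, a source is chosen adaptively via a softmax over running usefulness scores, and the mapped gradient is blended with the target's own gradient. All names here (`AdaptiveTransfer`, `beta`, the random linear maps) are assumptions introduced for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)


class AdaptiveTransfer:
    """Toy sketch of adaptive policy gradient transfer across
    heterogeneous source tasks (illustrative, not the paper's method)."""

    def __init__(self, n_sources, target_dim, source_dims, lr=0.1):
        # Running usefulness score per source task (all equal at start).
        self.scores = np.zeros(n_sources)
        # Hypothetical linear maps bridging each heterogeneous source
        # parameter space (dimension d) into the target space.
        self.maps = [rng.normal(size=(target_dim, d)) / np.sqrt(d)
                     for d in source_dims]
        self.lr = lr

    def select(self):
        # Softmax over scores: sources that helped before are picked
        # more often, but every source keeps nonzero probability.
        p = np.exp(self.scores - self.scores.max())
        p /= p.sum()
        return int(rng.choice(len(p), p=p)), p

    def blended_gradient(self, target_grad, source_grads, beta=0.5):
        k, _ = self.select()
        # Project the chosen source's gradient into the target space.
        mapped = self.maps[k] @ source_grads[k]
        g = (1.0 - beta) * target_grad + beta * mapped
        # Credit assignment: raise the score of a source whose mapped
        # gradient aligns with the target's native gradient direction.
        align = float(target_grad @ mapped) / (
            np.linalg.norm(target_grad) * np.linalg.norm(mapped) + 1e-8)
        self.scores[k] += self.lr * align
        return g
```

In this sketch the target update direction stays a convex combination of native and transferred gradients, so a misleading source degrades the update only by the mixing weight `beta` while its score, and hence its selection probability, shrinks over time.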
About the journal:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.