{"title":"深度强化学习的全局动态动作持久性适应","authors":"Junbo Tong, Daming Shi, Yi Liu, Wenhui Fan","doi":"https://dl.acm.org/doi/10.1145/3590154","DOIUrl":null,"url":null,"abstract":"<p>In the implementation of deep reinforcement learning (DRL), action persistence strategies are often adopted so agents maintain their actions for a fixed or variable number of steps. The choice of the persistent duration for agent actions usually has notable effects on the performance of reinforcement learning algorithms. Aiming at the research gap of global dynamic optimal action persistence and its application in multi-agent systems, we propose a novel framework: global dynamic action persistence (GLDAP), which achieves global action persistence adaptation for deep reinforcement learning. We introduce a closed-loop method that is used to learn the estimated value and the corresponding policy of each candidate action persistence. Our experiment shows that GLDAP achieves an average of 2.5%~90.7% performance improvement and 3~20 times higher sampling efficiency over several baselines across various single-agent and multi-agent domains. We also validate the ability of GLDAP to determine the optimal action persistence through multiple experiments.</p>","PeriodicalId":50919,"journal":{"name":"ACM Transactions on Autonomous and Adaptive Systems","volume":"7 2","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2023-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GLDAP: Global Dynamic Action Persistence Adaptation for Deep Reinforcement Learning\",\"authors\":\"Junbo Tong, Daming Shi, Yi Liu, Wenhui Fan\",\"doi\":\"https://dl.acm.org/doi/10.1145/3590154\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In the implementation of deep reinforcement learning (DRL), action persistence strategies are often adopted so agents maintain their actions for a fixed or variable number of steps. The choice of the persistent duration for agent actions usually has notable effects on the performance of reinforcement learning algorithms. Aiming at the research gap of global dynamic optimal action persistence and its application in multi-agent systems, we propose a novel framework: global dynamic action persistence (GLDAP), which achieves global action persistence adaptation for deep reinforcement learning. We introduce a closed-loop method that is used to learn the estimated value and the corresponding policy of each candidate action persistence. Our experiment shows that GLDAP achieves an average of 2.5%~90.7% performance improvement and 3~20 times higher sampling efficiency over several baselines across various single-agent and multi-agent domains. 
We also validate the ability of GLDAP to determine the optimal action persistence through multiple experiments.</p>\",\"PeriodicalId\":50919,\"journal\":{\"name\":\"ACM Transactions on Autonomous and Adaptive Systems\",\"volume\":\"7 2\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2023-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Autonomous and Adaptive Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/https://dl.acm.org/doi/10.1145/3590154\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Autonomous and Adaptive Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3590154","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
GLDAP: Global Dynamic Action Persistence Adaptation for Deep Reinforcement Learning
In the implementation of deep reinforcement learning (DRL), action persistence strategies are often adopted so agents maintain their actions for a fixed or variable number of steps. The choice of the persistent duration for agent actions usually has notable effects on the performance of reinforcement learning algorithms. Aiming at the research gap of global dynamic optimal action persistence and its application in multi-agent systems, we propose a novel framework: global dynamic action persistence (GLDAP), which achieves global action persistence adaptation for deep reinforcement learning. We introduce a closed-loop method that is used to learn the estimated value and the corresponding policy of each candidate action persistence. Our experiment shows that GLDAP achieves an average of 2.5%~90.7% performance improvement and 3~20 times higher sampling efficiency over several baselines across various single-agent and multi-agent domains. We also validate the ability of GLDAP to determine the optimal action persistence through multiple experiments.
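The abstract contrasts fixed action persistence with the globally dynamic persistence that GLDAP learns. For reference only, below is a minimal sketch of the generic fixed action-persistence (action-repeat) mechanism, not the GLDAP algorithm itself. It assumes a Gymnasium-style environment API; the ActionPersistenceWrapper class and the CartPole usage are illustrative assumptions rather than anything from the paper.

# Minimal sketch of fixed action persistence (action repeat), assuming a
# Gymnasium-style environment API. Illustrative only; not the GLDAP method.
import gymnasium as gym


class ActionPersistenceWrapper(gym.Wrapper):
    """Repeat each chosen action for `persistence` consecutive environment steps."""

    def __init__(self, env: gym.Env, persistence: int = 4):
        super().__init__(env)
        assert persistence >= 1
        self.persistence = persistence

    def step(self, action):
        total_reward = 0.0
        terminated = truncated = False
        obs, info = None, {}
        for _ in range(self.persistence):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward          # accumulate reward over the persisted action
            if terminated or truncated:     # stop early if the episode ends mid-repeat
                break
        return obs, total_reward, terminated, truncated, info


if __name__ == "__main__":
    # Example: a fixed persistence of 4 on CartPole. A dynamic scheme would instead
    # choose the persistence per decision step, which is the gap GLDAP targets.
    env = ActionPersistenceWrapper(gym.make("CartPole-v1"), persistence=4)
    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    print(reward, terminated, truncated)

A dynamic approach such as the one described in the abstract would instead select the persistence per decision step and, in GLDAP's case, learn an estimated value and corresponding policy for each candidate persistence; the fixed wrapper above only illustrates the mechanism being adapted.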
Journal introduction:
TAAS addresses research on autonomous and adaptive systems being undertaken by an increasingly interdisciplinary research community, and provides a common platform under which this work can be published and disseminated. TAAS encourages contributions aimed at supporting the understanding, development, and control of such systems and of their behaviors. Contributions are expected to be based on sound and innovative theoretical models, algorithms, engineering and programming techniques, infrastructures and systems, or technological and application experiences.