CoSD: Balancing behavioral consistency and diversity in unsupervised skill discovery.

IF 6.0 | CAS Tier 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence
Shuai Qing, Yi Sun, Kun Ding, Hui Zhang, Fei Zhu
Journal: Neural Networks, Volume 182, Article 106889
DOI: 10.1016/j.neunet.2024.106889 (https://doi.org/10.1016/j.neunet.2024.106889)
Published: 2024-11-12 (Journal Article)
Citations: 0

Abstract


In hierarchical reinforcement learning, unsupervised skill discovery holds promise for overcoming the sparse-reward challenge commonly encountered in traditional reinforcement learning. Although previous unsupervised skill discovery methods excelled at maximizing intrinsic rewards, they often over-prioritized skill diversity. Unrestrained pursuit of diversity drives skills to focus on unexplored regions of the state space while overlooking their own internal consistency, so the state-visit distribution of each individual skill lacks concentration. To address this problem, the Constrained Skill Discovery (CoSD) algorithm is proposed to balance the diversity and behavioral consistency of skills. CoSD integrates both the forward and the reverse decomposition forms of mutual information and uses a maximum-entropy policy to maximize the information-theoretic objective of skill learning, while requiring that each skill maintain low state entropy internally. This enhances behavioral consistency while still pursuing diversity and ensures that the learned skills are highly stable. Experimental results demonstrated that, compared with other mutual-information-based skill discovery methods, skills learned by CoSD exhibited a more concentrated state-visit distribution, indicating higher behavioral consistency and stability. In some complex downstream tasks, the skills with higher behavioral consistency achieved superior performance.
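The abstract does not state the objective explicitly, so the following is only a sketch, in standard notation, of the kind of constrained objective it describes; the entropy weight \(\alpha\), the penalty weight \(\beta\), and the penalty form are assumptions, not the paper's formulation. With skills \(z\), states \(s\), actions \(a\), and policy \(\pi(a \mid s, z)\), combining the two mutual-information decompositions with a maximum-entropy policy and a per-skill state-entropy constraint could read:

\[
\max_{\pi}\;\; I(S;Z) \;+\; \alpha\,\mathcal{H}(A \mid S, Z),
\qquad
I(S;Z) \;=\; \underbrace{\mathcal{H}(Z) - \mathcal{H}(Z \mid S)}_{\text{reverse form}}
\;=\; \underbrace{\mathcal{H}(S) - \mathcal{H}(S \mid Z)}_{\text{forward form}},
\]
\[
\text{subject to}\quad \mathcal{H}(S \mid Z = z) \le \epsilon \;\;\text{for each skill } z,
\quad\text{e.g. relaxed to a penalty } -\beta\,\mathcal{H}(S \mid Z) \text{ in the intrinsic reward.}
\]

The reverse form is the discriminator-style term familiar from prior mutual-information skill discovery, while the forward form makes the per-skill state entropy \(\mathcal{H}(S \mid Z)\) explicit; keeping it low is what gives each skill the concentrated state-visit distribution the abstract refers to.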

Source journal: Neural Networks (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.