Causal representation learning in offline visual reinforcement learning

IF 7.2 | CAS Tier 1 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Yaru Zhang, Kaizhou Chen, Yunlong Liu
{"title":"Causal representation learning in offline visual reinforcement learning","authors":"Yaru Zhang,&nbsp;Kaizhou Chen,&nbsp;Yunlong Liu","doi":"10.1016/j.knosys.2025.113565","DOIUrl":null,"url":null,"abstract":"<div><div>Real-world reinforcement learning (RL) applications contend with high-dimensional visual observations contaminated by confounding factors, which induce spurious correlations and obscure decision-relevant information. Compounding this issue, the inability to interact online necessitates reliance on pre-collected datasets, thereby hampering a deeper understanding of complex environment structures. In this work, by focusing on the causal rather than spurious correlations in the input and explicitly distinguishing between task-related and task-irrelevant elements of the causal variables, we propose a mask-based algorithm for learning task-related minimal causal state representations, namely MMCS. Specifically, MMCS guides the decoupling of minimal causal variables through mask network partitioning and jointly enforcing conditional independence and causal sufficiency, thereby eliminating unnecessary dependencies between variables and uncovering causal dependency structures. More importantly, MMCS is decoupled from downstream policy learning, and can function as a plug-in method compatible with any offline reinforcement learning algorithm. Empirical results on the Visual-D4RL benchmark demonstrate that MMCS significantly improves performance and sample efficiency in downstream policy learning. In addition, its robust performance in various distraction environments highlights the potential of MMCS to improve the generalizability of offline RL, especially under conditions of limited data and visual distractions. Code is available at <span><span>https://github.com/DMU-XMU/MMCS.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"320 ","pages":"Article 113565"},"PeriodicalIF":7.2000,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125006112","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Real-world reinforcement learning (RL) applications contend with high-dimensional visual observations contaminated by confounding factors, which induce spurious correlations and obscure decision-relevant information. Compounding this issue, the inability to interact online necessitates reliance on pre-collected datasets, thereby hampering a deeper understanding of complex environment structures. In this work, by focusing on the causal rather than spurious correlations in the input and explicitly distinguishing between task-related and task-irrelevant elements of the causal variables, we propose a mask-based algorithm for learning task-related minimal causal state representations, namely MMCS. Specifically, MMCS guides the decoupling of minimal causal variables through mask network partitioning and jointly enforcing conditional independence and causal sufficiency, thereby eliminating unnecessary dependencies between variables and uncovering causal dependency structures. More importantly, MMCS is decoupled from downstream policy learning, and can function as a plug-in method compatible with any offline reinforcement learning algorithm. Empirical results on the Visual-D4RL benchmark demonstrate that MMCS significantly improves performance and sample efficiency in downstream policy learning. In addition, its robust performance in various distraction environments highlights the potential of MMCS to improve the generalizability of offline RL, especially under conditions of limited data and visual distractions. Code is available at https://github.com/DMU-XMU/MMCS.git.
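The abstract describes MMCS as learning a mask over latent state variables that separates task-relevant from task-irrelevant factors, with the representation learned independently of the downstream offline RL algorithm. Below is a minimal, hypothetical sketch of that idea only: a learnable per-dimension mask gates an encoder's latents into two parts, with an L1 sparsity penalty standing in for the "minimal" criterion. All names and dimensions here are illustrative assumptions; this is not the authors' implementation, which additionally enforces conditional independence and causal sufficiency (see the linked repository for the actual MMCS code).

```python
# Illustrative sketch only (not the authors' code): a learnable mask partitions
# encoder latents into task-relevant and task-irrelevant parts.
import torch
import torch.nn as nn


class MaskedCausalEncoder(nn.Module):
    def __init__(self, obs_dim: int = 64 * 64 * 3, latent_dim: int = 32):
        super().__init__()
        # Simple MLP encoder standing in for the visual encoder.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # One learnable logit per latent dimension; sigmoid gives a soft mask.
        self.mask_logits = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)
        mask = torch.sigmoid(self.mask_logits)   # per-dimension gate in (0, 1)
        z_task = z * mask                        # task-relevant part
        z_irrelevant = z * (1.0 - mask)          # task-irrelevant part
        sparsity_loss = mask.abs().mean()        # encourage a minimal mask
        return z_task, z_irrelevant, sparsity_loss


if __name__ == "__main__":
    model = MaskedCausalEncoder()
    obs = torch.randn(8, 64 * 64 * 3)            # a batch of flattened frames
    z_task, z_irr, reg = model(obs)
    print(z_task.shape, z_irr.shape, reg.item())
```

In such a scheme, only the masked (task-relevant) latents would be passed to the downstream offline RL algorithm, which is consistent with the abstract's claim that the representation module is a plug-in decoupled from policy learning.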
Source journal

Knowledge-Based Systems (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.