Learning Causally Invariant Reward Functions from Diverse Demonstrations

Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann
{"title":"Learning Causally Invariant Reward Functions from Diverse Demonstrations","authors":"Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann","doi":"arxiv-2409.08012","DOIUrl":null,"url":null,"abstract":"Inverse reinforcement learning methods aim to retrieve the reward function of\na Markov decision process based on a dataset of expert demonstrations. The\ncommonplace scarcity and heterogeneous sources of such demonstrations can lead\nto the absorption of spurious correlations in the data by the learned reward\nfunction. Consequently, this adaptation often exhibits behavioural overfitting\nto the expert data set when a policy is trained on the obtained reward function\nunder distribution shift of the environment dynamics. In this work, we explore\na novel regularization approach for inverse reinforcement learning methods\nbased on the causal invariance principle with the goal of improved reward\nfunction generalization. By applying this regularization to both exact and\napproximate formulations of the learning task, we demonstrate superior policy\nperformance when trained using the recovered reward functions in a transfer\nsetting","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process from a dataset of expert demonstrations. The commonplace scarcity and heterogeneous sources of such demonstrations can lead the learned reward function to absorb spurious correlations in the data. Consequently, a policy trained on the obtained reward function often exhibits behavioural overfitting to the expert dataset under distribution shift of the environment dynamics. In this work, we explore a novel regularization approach for inverse reinforcement learning methods based on the causal invariance principle, with the goal of improved reward function generalization. By applying this regularization to both exact and approximate formulations of the learning task, we demonstrate superior policy performance when policies are trained using the recovered reward functions in a transfer setting.
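The abstract describes regularizing reward learning with a causal invariance principle so that the reward does not absorb environment-specific correlations from heterogeneous demonstrations. As a rough illustration only, and not the paper's actual formulation, the sketch below applies an IRM-v1-style gradient penalty (Arjovsky et al., 2019) per demonstration environment to a discriminator-style reward-learning objective; `reward_net`, `expert_batches`, and `policy_batches` are hypothetical placeholders introduced here for clarity.

```python
# Minimal sketch, assuming an adversarial-IRL-style setup where `reward_net`
# maps a batch of (state, action) transitions to one scalar logit per
# transition, and the discriminator classifies expert vs. policy transitions.
import torch
import torch.nn.functional as F


def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRM-v1 penalty: squared gradient of the per-environment risk with
    respect to a dummy scale factor multiplying the logits."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


def invariant_reward_loss(reward_net, expert_batches, policy_batches,
                          penalty_weight: float = 10.0) -> torch.Tensor:
    """Average discriminator risk over demonstration environments plus an
    invariance penalty, discouraging the reward from relying on
    environment-specific (spurious) features of the demonstrations."""
    total_risk, total_penalty = 0.0, 0.0
    for expert_sa, policy_sa in zip(expert_batches, policy_batches):
        logits = torch.cat([reward_net(expert_sa), reward_net(policy_sa)])
        labels = torch.cat([
            torch.ones(len(expert_sa), device=logits.device),
            torch.zeros(len(policy_sa), device=logits.device),
        ])
        total_risk += F.binary_cross_entropy_with_logits(logits, labels)
        total_penalty += irm_penalty(logits, labels)
    n = len(expert_batches)
    return total_risk / n + penalty_weight * total_penalty / n
```

The design intent of such a penalty is that a reward function whose per-environment risks cannot be improved by per-environment rescaling relies on features predictive in all demonstration environments, which is the intuition behind the causal-invariance regularization the abstract refers to.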