Learning Causally Invariant Reward Functions from Diverse Demonstrations
Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann
arXiv:2409.08012 [cs.LG], 12 September 2024
Abstract
Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process from a dataset of expert demonstrations. Because such demonstrations are commonly scarce and drawn from heterogeneous sources, the learned reward function can absorb spurious correlations present in the data. As a consequence, a policy trained on the recovered reward function often exhibits behavioural overfitting to the expert dataset under distribution shift of the environment dynamics. In this work, we explore a novel regularization approach for inverse reinforcement learning methods based on the causal invariance principle, with the goal of improving reward function generalization. By applying this regularization to both exact and approximate formulations of the learning task, we demonstrate superior policy performance when training with the recovered reward functions in a transfer setting.
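
The abstract describes regularizing the reward-learning objective with a causal-invariance penalty computed across heterogeneous demonstration sources. The following is only a minimal sketch of that general idea, using an IRM-v1-style dummy-scale penalty: the RewardNet architecture, the expert-vs-non-expert surrogate loss, and the batch layout are assumptions for illustration, not the paper's actual formulation.

"""Hypothetical sketch: IRM-style causal-invariance penalty on a learned reward model.
All names and the per-source surrogate loss are illustrative placeholders."""
import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardNet(nn.Module):
    """State-action reward model r_theta(s, a)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def source_loss(reward_net, batch, dummy_scale):
    """Per-demonstration-source surrogate loss: classify expert vs. non-expert
    transitions with the (scaled) reward as a logit.  The dummy scale is fixed
    at 1.0 and only used to measure how far this source's optimal classifier
    deviates from the shared one (the IRM-v1 device)."""
    logits = dummy_scale * reward_net(batch["obs"], batch["act"])
    return F.binary_cross_entropy_with_logits(logits, batch["is_expert"].float())


def invariance_penalty(loss, dummy_scale):
    """Squared gradient of the per-source loss w.r.t. the dummy scale."""
    (grad,) = torch.autograd.grad(loss, dummy_scale, create_graph=True)
    return grad.pow(2)


def training_step(reward_net, source_batches, optimizer, penalty_weight=10.0):
    """One reward update over batches drawn from several demonstration sources."""
    dummy_scale = torch.tensor(1.0, requires_grad=True)
    total_risk, total_penalty = 0.0, 0.0
    for batch in source_batches:
        loss = source_loss(reward_net, batch, dummy_scale)
        total_risk = total_risk + loss
        total_penalty = total_penalty + invariance_penalty(loss, dummy_scale)
    # Average risk across sources plus the weighted invariance penalty.
    objective = total_risk + penalty_weight * total_penalty
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return objective.item()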