A Survey of Inverse Constrained Reinforcement Learning: Definitions, Progress and Challenges

Guiliang Liu, Sheng Xu, Shicheng Liu, Ashish Gaurav, Sriram Ganapathi Subramanian, Pascal Poupart

arXiv - CS - Machine Learning · Published 2024-09-11 · DOI: https://doi.org/arxiv-2409.07569
Citations: 0
Abstract
Inverse Constrained Reinforcement Learning (ICRL) is the task of inferring
the implicit constraints followed by expert agents from their demonstration
data. As an emerging research topic, ICRL has received considerable attention
in recent years. This article presents a categorical survey of the latest
advances in ICRL. It serves as a comprehensive reference for machine learning
researchers and practitioners, as well as newcomers seeking to understand the
definitions, advancements, and important challenges in ICRL. We begin by
formally defining the problem and outlining the algorithmic framework that
facilitates constraint inference across various scenarios. These include
deterministic or stochastic environments, environments with limited
demonstrations, and multiple agents. For each context, we illustrate the
critical challenges and introduce a series of fundamental methods to tackle
these issues. This survey encompasses discrete, virtual, and realistic
environments for evaluating ICRL agents. We also delve into the most pertinent
applications of ICRL, such as autonomous driving, robot control, and sports
analytics. To stimulate continuing research, we conclude the survey with a
discussion of key unresolved questions in ICRL that can effectively foster a
bridge between theoretical understanding and practical industrial applications.
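To make the core problem statement concrete, the following is a minimal, hypothetical sketch of the constraint-inference idea, not any specific algorithm from the survey: in a toy 3x3 gridworld, states that lie on an unconstrained shortest path but are never visited by the expert become candidate constraints. The gridworld, start/goal states, and demonstrations are all invented for illustration.

```python
# Hypothetical 3x3 gridworld: start (0, 0), goal (2, 2), 4-connected moves.
# Expert demonstrations (state sequences) that all detour around (1, 1).
demos = [
    [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)],
    [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)],
]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

start, goal = (0, 0), (2, 2)
states = [(r, c) for r in range(3) for c in range(3)]

# A state lies on some unconstrained shortest path iff passing through it
# adds no detour: dist(start, s) + dist(s, goal) == dist(start, goal).
on_optimal = {
    s for s in states
    if manhattan(start, s) + manhattan(s, goal) == manhattan(start, goal)
}

# States the expert actually occupies across all demonstrations.
visited = {s for demo in demos for s in demo}

# Candidate constraints: states an unconstrained optimal agent would use,
# yet the expert systematically avoids.
inferred = on_optimal - visited
print(inferred)  # {(1, 1)}
```

Here the expert consistently takes longer-looking routes that skip (1, 1), so the avoidance itself is the evidence for a constraint; practical ICRL methods generalize this intuition with probabilistic models over stochastic dynamics and limited demonstrations, as the survey describes.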