Case-Based Inverse Reinforcement Learning Using Temporal Coherence

Jonas Nüßlein, Steffen Illium, Robert Müller, Thomas Gabor, Claudia Linnhoff-Popien
{"title":"Case-Based Inverse Reinforcement Learning Using Temporal Coherence","authors":"Jonas Nüßlein, Steffen Illium, Robert Müller, Thomas Gabor, Claudia Linnhoff-Popien","doi":"10.48550/arXiv.2206.05827","DOIUrl":null,"url":null,"abstract":"Providing expert trajectories in the context of Imitation Learning is often expensive and time-consuming. The goal must therefore be to create algorithms which require as little expert data as possible. In this paper we present an algorithm that imitates the higher-level strategy of the expert rather than just imitating the expert on action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to the expert. Building on the idea of Temporal Coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close in time. During inference, the agent compares its current state with expert states from a Case Base for similarity. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.","PeriodicalId":239937,"journal":{"name":"International Conference on Case-Based Reasoning","volume":"160 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Case-Based Reasoning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2206.05827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Providing expert trajectories in the context of Imitation Learning is often expensive and time-consuming. The goal must therefore be to create algorithms that require as little expert data as possible. In this paper we present an algorithm that imitates the higher-level strategy of the expert rather than just imitating the expert at the action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to those visited by the expert. Building on the idea of Temporal Coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close in time. During inference, the agent compares its current state with expert states from a Case Base for similarity. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.
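
The abstract only sketches the mechanism, so the following is a minimal illustrative sketch (PyTorch) of how the two ingredients it describes could fit together: a similarity network trained with a temporal-coherence objective (state pairs sampled close in time within an expert trajectory serve as positives, distant pairs as negatives), and an inference-time score that compares the agent's current state against expert states stored in a case base. All names (SimilarityNet, train_step, case_base_reward), the architecture, and the loss are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    """Hypothetical similarity network: predicts the logit of
    "these two states occur close in time" (temporal coherence)."""

    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s_a: torch.Tensor, s_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_a, s_b], dim=-1)).squeeze(-1)


def train_step(model, optimizer, pos_pairs, neg_pairs):
    """One training step. pos_pairs / neg_pairs are (s_a, s_b) tensor tuples:
    positives are temporally close expert states, negatives are distant ones."""
    loss_fn = nn.BCEWithLogitsLoss()
    pos_logits = model(*pos_pairs)
    neg_logits = model(*neg_pairs)
    loss = loss_fn(pos_logits, torch.ones_like(pos_logits)) \
         + loss_fn(neg_logits, torch.zeros_like(neg_logits))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def case_base_reward(model, state, case_base):
    """Inference-time score: similarity of the agent's current state to the
    most similar expert state in the case base (a tensor of expert states)."""
    with torch.no_grad():
        expanded = state.unsqueeze(0).expand(case_base.shape[0], -1)
        scores = torch.sigmoid(model(expanded, case_base))
    return scores.max().item()
```

Under these assumptions, the max-similarity score could serve as a dense reward signal for a standard RL algorithm, steering the agent toward states that the expert demonstrated rather than toward the expert's exact actions.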