Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring

Merlijn Krale, T. D. Simão, N. Jansen
{"title":"Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring","authors":"Merlijn Krale, T. D. Simão, N. Jansen","doi":"10.48550/arXiv.2303.08271","DOIUrl":null,"url":null,"abstract":"We study Markov decision processes (MDPs), where agents control when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MPDs). In these models, actions have two components: a control action that influences how the environment changes and a measurement action that affects the agent's observation. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. To decide whether or not to measure, we introduce the concept of measuring value. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss it incurs. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially-observable environments.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"160 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2303.08271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We study Markov decision processes (MDPs) where agents control when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MDPs). In these models, actions have two components: a control action that influences how the environment changes and a measurement action that affects the agent's observation. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. To decide whether or not to measure, we introduce the concept of measuring value. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss it incurs. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially observable environments.
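
The two ideas in the abstract, choosing control actions as if future state uncertainty could be ignored and measuring only when it is worth the cost, can be illustrated with a small sketch. The Python snippet below is not the paper's implementation; the tabular belief/Q-value representation, the function name atm_step, and the value-of-information proxy used for the measuring value are all assumptions made for illustration.

```python
# A minimal sketch of the act-then-measure idea, assuming a tabular setting.
# Names and the measuring-value proxy are illustrative, not from the paper.
import numpy as np

def atm_step(belief, q_values, measure_cost):
    """Choose a control action and decide whether to also measure.

    belief       : np.ndarray of shape (n_states,), probability over states
    q_values     : np.ndarray of shape (n_states, n_actions), state-action values
    measure_cost : float, cost charged for taking the measurement action
    """
    # Act: pick the control action greedily against the current belief,
    # i.e. ignore future state uncertainty (the ATM heuristic).
    expected_q = belief @ q_values              # expected value of each action
    control_action = int(np.argmax(expected_q))

    # Measure: a simple value-of-information proxy for the "measuring value",
    # i.e. how much we expect to gain from knowing the state before acting.
    value_if_state_known = belief @ q_values.max(axis=1)
    measuring_value = value_if_state_known - expected_q[control_action]
    measure = bool(measuring_value > measure_cost)

    return control_action, measure

# Toy usage: with this belief and value table the agent picks action 0 and,
# since the measuring value (0.6) exceeds the cost (0.1), chooses to measure.
belief = np.array([0.7, 0.3])
q = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(atm_step(belief, q, measure_cost=0.1))    # -> (0, True)
```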