Explaining robot policies

Applied AI Letters, Pub Date: 2021-11-13, DOI: 10.1002/ail2.52
Olivia Watkins, Sandy Huang, Julius Frost, Kush Bhatia, Eric Weiner, Pieter Abbeel, Trevor Darrell, Bryan Plummer, Kate Saenko, Anca Dragan
{"title":"解释机器人政策","authors":"Olivia Watkins,&nbsp;Sandy Huang,&nbsp;Julius Frost,&nbsp;Kush Bhatia,&nbsp;Eric Weiner,&nbsp;Pieter Abbeel,&nbsp;Trevor Darrell,&nbsp;Bryan Plummer,&nbsp;Kate Saenko,&nbsp;Anca Dragan","doi":"10.1002/ail2.52","DOIUrl":null,"url":null,"abstract":"<p>In order to interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need to have an accurate mental model of how the robot acts in different situations. We propose to improve users' mental model of a robot by showing them examples of how the robot behaves in informative scenarios. We explore this in two settings. First, we show that when there are many possible environment states, users can more quickly understand the robot's policy if they are shown <i>critical states</i> where taking a particular action is important. Second, we show that when there is a distribution shift between training and test environment distributions, then it is more effective to show <i>exploratory states</i> that the robot does not visit naturally.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.52","citationCount":"4","resultStr":"{\"title\":\"Explaining robot policies\",\"authors\":\"Olivia Watkins,&nbsp;Sandy Huang,&nbsp;Julius Frost,&nbsp;Kush Bhatia,&nbsp;Eric Weiner,&nbsp;Pieter Abbeel,&nbsp;Trevor Darrell,&nbsp;Bryan Plummer,&nbsp;Kate Saenko,&nbsp;Anca Dragan\",\"doi\":\"10.1002/ail2.52\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In order to interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need to have an accurate mental model of how the robot acts in different situations. We propose to improve users' mental model of a robot by showing them examples of how the robot behaves in informative scenarios. We explore this in two settings. First, we show that when there are many possible environment states, users can more quickly understand the robot's policy if they are shown <i>critical states</i> where taking a particular action is important. 
Second, we show that when there is a distribution shift between training and test environment distributions, then it is more effective to show <i>exploratory states</i> that the robot does not visit naturally.</p>\",\"PeriodicalId\":72253,\"journal\":{\"name\":\"Applied AI letters\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.52\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied AI letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ail2.52\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied AI letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ail2.52","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

In order to interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need an accurate mental model of how the robot acts in different situations. We propose to improve users' mental models of a robot by showing them examples of how the robot behaves in informative scenarios. We explore this in two settings. First, we show that when there are many possible environment states, users can more quickly understand the robot's policy if they are shown critical states, where taking a particular action is important. Second, we show that when there is a distribution shift between the training and test environments, it is more effective to show exploratory states that the robot does not visit naturally.
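
The abstract only names the two kinds of informative states; it does not spell out here how they are found. As a rough sketch only, not the paper's stated method, one could rank candidate states with a Q-value-gap rule for "critical" states and a visit-count proxy for "exploratory" states; all function names, the threshold-free top-k selection, and the toy data below are illustrative assumptions.

```python
import numpy as np

def critical_states(states, q_fn, top_k=10):
    """Rank states by how far the best action's Q-value exceeds the mean
    over actions; a large gap means choosing the right action matters,
    which is one plausible reading of 'critical states'."""
    gaps = np.array([q_fn(s).max() - q_fn(s).mean() for s in states])
    return [states[i] for i in np.argsort(gaps)[::-1][:top_k]]

def exploratory_states(states, visit_counts, top_k=10):
    """Pick states the policy rarely reaches on its own, so the user sees
    behavior outside the robot's natural state distribution."""
    counts = np.array([visit_counts.get(s, 0) for s in states])
    return [states[i] for i in np.argsort(counts)[:top_k]]

if __name__ == "__main__":
    # Toy example: 100 tabular states, 4 actions, random Q-table and
    # random visitation counts (purely synthetic, for illustration).
    rng = np.random.default_rng(0)
    states = list(range(100))
    q_table = rng.normal(size=(100, 4))
    visits = {s: int(rng.poisson(5)) for s in states}
    print("critical:", critical_states(states, lambda s: q_table[s], top_k=5))
    print("exploratory:", exploratory_states(states, visits, top_k=5))
```

Either ranking yields a small set of states to show a user; the paper's contribution is evaluating which set actually improves users' mental models in each setting, not the ranking mechanics themselves.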
