Would you Take Advice from a Robot? Developing a Framework for Inferring Human-Robot Trust in Time-Sensitive Scenarios

Jin Xu, A. Howard
{"title":"Would you Take Advice from a Robot? Developing a Framework for Inferring Human-Robot Trust in Time-Sensitive Scenarios","authors":"Jin Xu, A. Howard","doi":"10.1109/RO-MAN47096.2020.9223544","DOIUrl":null,"url":null,"abstract":"Trust is a key element for successful human-robot interaction. One challenging problem in this domain is the issue of how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust based on representing the human decision-making process as a formulation based on trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human subject studies in which the human is provided the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that a decision had to be made by the human in a limited time period, but each scenario featured different levels of cognitive load. The experimental results demonstrate that the proposed formulation can be utilized to model trust, in which the system can predict whether the human will decide to take advice (or not) from the robot. It was found that our prediction performance degrades after the robot made a mistake. The validation of this approach on two scenarios implies that this model can be applied to other interactive scenarios as long as the interaction dynamics fits into the proposed formulation. Directions for future improvements are discussed.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN47096.2020.9223544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Trust is a key element of successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models the trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process as a sequence of trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human-subject studies in which the human is given the option of taking advice from a robot. Both experimental scenarios were time-sensitive, in that the human had to make a decision within a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be used to model trust, such that the system can predict whether or not the human will decide to take the robot's advice. We found that prediction performance degrades after the robot makes a mistake. The validation of this approach on two scenarios suggests that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
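To make the described pipeline concrete, below is a minimal sketch of the general technique the abstract names: a two-state Hidden Markov Model (distrust/trust) forward-filtered over the human's observed accept/reject decisions, with a logistic-regression decoder mapping the inferred trust belief to a predicted decision. All parameter values, the binary observation encoding, and the feature choice are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: HMM trust-state filtering + logistic-regression decoding.
# All numbers below are made-up placeholders, not values from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical HMM parameters: state 0 = distrust, state 1 = trust.
pi = np.array([0.5, 0.5])        # initial state distribution
A = np.array([[0.8, 0.2],        # transition matrix P(s_t+1 | s_t)
              [0.1, 0.9]])
B = np.array([[0.7, 0.3],        # emission P(obs | state);
              [0.2, 0.8]])       # obs 0 = reject advice, 1 = accept

def filter_trust(observations):
    """Forward-filter P(state_t | obs_1..t) for each time step."""
    belief = pi.copy()
    beliefs = []
    for o in observations:
        belief = belief * B[:, o]   # condition on the observation
        belief /= belief.sum()      # normalize to a posterior
        beliefs.append(belief.copy())
        belief = A.T @ belief       # predict the next state
    return np.array(beliefs)

# Toy interaction history: 1 = took the robot's advice, 0 = did not.
obs = [1, 1, 1, 0, 0, 1]
trust_belief = filter_trust(obs)[:, 1]   # P(trust) at each step

# Logistic-regression decoder: predict the next decision from the
# current trust belief. Fit here on the toy sequence purely to show
# the mechanics; the paper trains on human-subject study data.
X = trust_belief[:-1].reshape(-1, 1)
y = np.array(obs[1:])
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])        # predicted P(accept advice)
```

In the paper, the trust-state model and the regression are learned from the two human-subject datasets; the toy fit above only demonstrates how a filtered trust belief can drive a prediction of whether the human will take the robot's advice.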