Assessing artificial trust in human-agent teams: a conceptual model

Carolina Centeio Jorge, M. Tielman, C. Jonker
DOI: 10.1145/3514197.3549696
Published in: Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents
Publication date: 2022-09-06
Citations: 1

Abstract

As intelligent agents become humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e. a belief regarding a human's trustworthiness. We see artificial trust as beliefs about competence and willingness, and we study which internal factors (krypta) of the human may play a role when assessing artificial trust. Furthermore, we investigate which observable measures (manifesta) an agent may take into account as cues for the human teammate's krypta. This paper proposes a conceptual model of artificial trust for a specific task during human-agent teamwork. Based on the literature and a preliminary user study, our model proposes observable measures related to human trustworthiness (ability, benevolence, integrity) and strategy (perceived cost and benefit) as predictors of willingness and competence.
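The mapping the abstract describes — observable manifesta serving as cues from which the agent forms beliefs about the human's krypta (competence and willingness) — can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual model: the field names follow the abstract's factors, but the weighting and aggregation formulas are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Manifesta:
    """Observable cues about the human teammate, each normalized to [0, 1].

    The factor names come from the abstract; the numeric encoding is an
    assumption for illustration.
    """
    ability: float            # observed task ability
    benevolence: float        # observed benevolence
    integrity: float          # observed integrity
    perceived_cost: float     # human's perceived cost of the task
    perceived_benefit: float  # human's perceived benefit of the task

def competence_belief(m: Manifesta) -> float:
    # Assumption: the competence belief is driven by observed ability.
    return m.ability

def willingness_belief(m: Manifesta) -> float:
    # Assumption: the willingness belief combines benevolence, integrity,
    # and the human's cost-benefit strategy (benefit minus cost, floored at 0).
    strategy = max(0.0, m.perceived_benefit - m.perceived_cost)
    return (m.benevolence + m.integrity + strategy) / 3.0

def artificial_trust(m: Manifesta) -> float:
    # Assumption: overall artificial trust averages the two krypta beliefs.
    return (competence_belief(m) + willingness_belief(m)) / 2.0

cues = Manifesta(ability=0.8, benevolence=0.9, integrity=0.7,
                 perceived_cost=0.2, perceived_benefit=0.6)
print(round(artificial_trust(cues), 3))  # prints 0.733
```

The point of the sketch is only the structure: manifesta are the inputs, krypta beliefs are intermediate estimates, and trust is task-specific and derived from both; the paper itself proposes the predictors, not these formulas.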