Assessing artificial trust in human-agent teams: a conceptual model
Carolina Centeio Jorge, M. Tielman, C. Jonker
DOI: 10.1145/3514197.3549696
Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, published 2022-09-06
Citations: 1
Abstract
As intelligent agents become humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e. a belief about the human's trustworthiness. We view artificial trust as beliefs about competence and willingness, and we study which internal factors (krypta) of the human may play a role when forming artificial trust. Furthermore, we investigate which observable measures (manifesta) an agent may take into account as cues for the human teammate's krypta. This paper proposes a conceptual model of artificial trust for a specific task during human-agent teamwork. Based on the literature and a preliminary user study, our model proposes observable measures related to human trustworthiness (ability, benevolence, integrity) and strategy (perceived cost and benefit) as predictors of willingness and competence.