Who Should I Trust: Human-AI Trust Model in AI Assisted Decision-Making

Yechen Yang
DOI: 10.54254/2753-7048/41/20240805
Journal: Lecture Notes in Education Psychology and Public Media
Published: 2024-03-14
Abstract

AI technology, with its extraordinary data-searching and computing capability, has been widely applied to assist human decision-makers across industries such as healthcare, business management, and public policy. As a crucial factor influencing the performance of human-AI interaction, trust has received growing research attention in recent years. Previous studies have identified multiple factors that significantly affect trust between human decision-makers and AI assistants; yet more attention needs to be paid to building a systematic model of trust in the human-AI collaboration context. Therefore, to construct such a model for the AI decision-making area, this paper reviews recent research, analyzes and synthesizes the significant factors of trust in the AI-assisted decision-making process, and establishes a theoretical ternary interaction model covering three major aspects: human decision-maker-related, AI-related, and scenario-related. Factors from these three aspects constitute the three major elements of trust, which together can be used to evaluate trust in the assisted decision-making process. This systematic trust model fills a theoretical gap in current studies of trust in human-AI interaction and provides implications for further research on AI trust-related topics.