Trusting under risk – comparing human to AI decision support agents

Impact Factor: 9.0 · CAS Region 1 (Psychology) · JCR Q1, PSYCHOLOGY, EXPERIMENTAL
Hannah Fahnenstich, Tobias Rieger, Eileen Roesler
{"title":"Trusting under risk – comparing human to AI decision support agents","authors":"Hannah Fahnenstich ,&nbsp;Tobias Rieger ,&nbsp;Eileen Roesler","doi":"10.1016/j.chb.2023.108107","DOIUrl":null,"url":null,"abstract":"<div><p>The growing number of safety-critical technologized workplaces leads to enhanced support of complex human decision-making by artificial intelligence (AI), increasing the relevance of risk in the joint decision process. This online study examined participants' trust, attitude and behavior during a visual estimation task supported by either a human or an AI decision support agent. Throughout the online studyrisk levels were manipulated through different scenarios. Contrary to recent literature, no main effects were found in participants' trust attitude or trust behavior between support agent conditions or risk levels. However, participants using AI support exhibited increased trust behavior under higher risk, while participants with human support agents did not display behavioral differences. Self-confidence vs. trust and an increased feeling of responsibility may be possible reasons. Furthermore, participants reported the human support agent to be more responsible for possible negative outcomes of the joint task than the AI support agent. Hereby, risk did not influence perceived responsibility. However, the study's findings concerning trust behavior underscore the crucial importance of investigating the impact of risk in workplaces, shedding light on the under-researched effect of risk on trust attitude and behavior in AI-supported human decision-making.</p></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"153 ","pages":"Article 108107"},"PeriodicalIF":9.0000,"publicationDate":"2023-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0747563223004582","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

The growing number of safety-critical technologized workplaces leads to enhanced support of complex human decision-making by artificial intelligence (AI), increasing the relevance of risk in the joint decision process. This online study examined participants' trust attitude and behavior during a visual estimation task supported by either a human or an AI decision support agent. Throughout the online study, risk levels were manipulated through different scenarios. Contrary to recent literature, no main effects of support agent or risk level were found on participants' trust attitude or trust behavior. However, participants using AI support exhibited increased trust behavior under higher risk, whereas participants with a human support agent did not display behavioral differences. The interplay of self-confidence versus trust and an increased feeling of responsibility are possible explanations. Furthermore, participants reported the human support agent to be more responsible for possible negative outcomes of the joint task than the AI support agent. Risk did not influence perceived responsibility. Nevertheless, the study's findings concerning trust behavior underscore the importance of investigating the impact of risk in workplaces, shedding light on the under-researched effect of risk on trust attitude and behavior in AI-supported human decision-making.

Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Articles published: 381
Review time: 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.