When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot.

IF 3.7 | Tier 1 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL
Journal of Experimental Psychology: General | Pub Date: 2025-02-01 | Epub Date: 2024-12-05 | Pages: 533-551 | DOI: 10.1037/xge0001696
Fanny Lalot, Anna-Marie Bertram
Citations: 0

Abstract

The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to those of interpersonal trust (i.e., in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization in trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust, as are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

Source journal
CiteScore: 6.20
Self-citation rate: 4.90%
Annual publications: 300
Journal description: The Journal of Experimental Psychology: General publishes articles describing empirical work that bridges the traditional interests of two or more communities of psychology. The work may touch on issues dealt with in JEP: Learning, Memory, and Cognition, JEP: Human Perception and Performance, JEP: Animal Behavior Processes, or JEP: Applied, but may also concern issues in other subdisciplines of psychology, including social processes, developmental processes, psychopathology, neuroscience, or computational modeling. Articles in JEP: General may be longer than the usual journal publication if necessary, but shorter articles that bridge subdisciplines will also be considered.