The influence of mental state attributions on trust in large language models.

Clara Colombatto, Jonathan Birch, Stephen M Fleming
Communications Psychology, vol. 3, article 84. Published 2025-05-25. DOI: 10.1038/s44271-025-00262-1
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12104094/pdf/

Abstract

Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for 'experience' (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of 'intelligence' (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users' attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs, with different aspects of mental state attribution predicting people's trust in these systems.
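The abstract reports Bayesian evidence both for and against correlations. One common way to quantify such evidence is a Bayes factor comparing the null hypothesis (no correlation) against an alternative. The sketch below is purely illustrative and is not the authors' analysis: it uses a Savage-Dickey density ratio under a Fisher-z normal approximation, with an assumed standard-normal prior on the transformed correlation.

```python
import numpy as np
from scipy.stats import norm

def bf01_correlation(r, n, prior_sd=1.0):
    """Illustrative Savage-Dickey Bayes factor for H0: rho = 0.

    Uses the Fisher z-transform, whose sampling distribution is
    approximately normal with variance 1/(n - 3), and a N(0, prior_sd^2)
    prior on the transformed correlation. This is a sketch under those
    assumptions, not the analysis used in the paper.
    """
    z_obs = np.arctanh(r)            # Fisher z-transform of the sample r
    se2 = 1.0 / (n - 3)              # approximate sampling variance of z
    # Conjugate normal update: posterior over the true transformed rho
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se2)
    post_mean = post_var * z_obs / se2
    # Savage-Dickey: BF01 = posterior density at 0 / prior density at 0;
    # BF01 > 1 favours the null, BF01 < 1 favours a nonzero correlation.
    return norm.pdf(0.0, post_mean, np.sqrt(post_var)) / norm.pdf(0.0, 0.0, prior_sd)
```

With n = 410 (the study's sample size), a sample correlation near zero yields BF01 well above 1 (evidence for the null), while a sizeable correlation such as r = 0.5 drives BF01 toward zero.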
