The influence of mental state attributions on trust in large language models
Clara Colombatto, Jonathan Birch, Stephen M Fleming
Communications Psychology, 3(1), 84 (published 2025-05-25). DOI: 10.1038/s44271-025-00262-1
Abstract
Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for 'experience' (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of 'intelligence' (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users' attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs, with different aspects of mental state attribution predicting people's trust in these systems.
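For readers who want a concrete sense of the kind of analysis the abstract describes, the sketch below illustrates a default Bayesian test of the correlation between mental-state attribution ratings and advice-taking. It uses simulated data and the pingouin library; the variable names, simulated effect sizes, and choice of library are illustrative assumptions, not the authors' actual analysis pipeline or results.

```python
# Illustrative sketch only: a Bayesian correlation test on simulated data,
# loosely mirroring the analysis described in the abstract (attribution
# ratings vs. advice-taking). Not the authors' code or data.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
n = 410  # sample size reported in the abstract

# Hypothetical per-participant scores (assumed variable names):
experience_attrib = rng.normal(size=n)    # ratings of 'experience' (e.g., emotions, consciousness)
intelligence_attrib = rng.normal(size=n)  # ratings of 'intelligence' (e.g., reasoning, planning)
# Simulated advice-taking index with an assumed positive link to intelligence
# attributions and a weaker negative link to experience attributions.
advice_taking = 0.4 * intelligence_attrib - 0.2 * experience_attrib + rng.normal(size=n)

for label, ratings in [("experience", experience_attrib),
                       ("intelligence", intelligence_attrib)]:
    r = np.corrcoef(ratings, advice_taking)[0, 1]
    # Default Bayes factor (BF10) comparing H1 (nonzero correlation) to H0 (no correlation)
    bf10 = pg.bayesfactor_pearson(r, n)
    print(f"{label}: r = {r:.2f}, BF10 = {bf10:.2f}")
```

On simulated data like this, BF10 values well above 1 would indicate evidence for a correlation and values well below 1 evidence against one; the paper's "strong evidence against a positive correlation" corresponds to the latter pattern for consciousness-related attributions.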