AI-induced indifference: Unfair AI reduces prosociality

IF 2.8 · CAS Tier 1 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL
Raina Zexuan Zhang, Ellie J. Kyung, Chiara Longoni, Luca Cian, Kellen Mrkva
{"title":"人工智能引发的冷漠:不公平的人工智能会降低亲社会性","authors":"Raina Zexuan Zhang ,&nbsp;Ellie J. Kyung ,&nbsp;Chiara Longoni ,&nbsp;Luca Cian ,&nbsp;Kellen Mrkva","doi":"10.1016/j.cognition.2024.105937","DOIUrl":null,"url":null,"abstract":"<div><div>The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. Four experiments (combined <em>N</em> = 2425) show that receiving an unfair allocation by an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision—an effect we term <em>AI-induced indifference</em>. In Experiment 1, after receiving an unfair monetary allocation by an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: People blame AI actors less than their human counterparts for unfair behavior, decreasing people's desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood to act prosocially. Implications for future research are discussed.</div><div>All preregistrations, data, code, statistical outputs, stimuli qsf files, and the Supplementary Appendix are posted on OSF at: <span><span>https://bit.ly/OSF_unfairAI</span><svg><path></path></svg></span></div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 105937"},"PeriodicalIF":2.8000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI-induced indifference: Unfair AI reduces prosociality\",\"authors\":\"Raina Zexuan Zhang ,&nbsp;Ellie J. Kyung ,&nbsp;Chiara Longoni ,&nbsp;Luca Cian ,&nbsp;Kellen Mrkva\",\"doi\":\"10.1016/j.cognition.2024.105937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. 
Four experiments (combined <em>N</em> = 2425) show that receiving an unfair allocation by an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision—an effect we term <em>AI-induced indifference</em>. In Experiment 1, after receiving an unfair monetary allocation by an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: People blame AI actors less than their human counterparts for unfair behavior, decreasing people's desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood to act prosocially. Implications for future research are discussed.</div><div>All preregistrations, data, code, statistical outputs, stimuli qsf files, and the Supplementary Appendix are posted on OSF at: <span><span>https://bit.ly/OSF_unfairAI</span><svg><path></path></svg></span></div></div>\",\"PeriodicalId\":48455,\"journal\":{\"name\":\"Cognition\",\"volume\":\"254 \",\"pages\":\"Article 105937\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognition\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0010027724002233\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognition","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010027724002233","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. Four experiments (combined N = 2425) show that receiving an unfair allocation by an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision—an effect we term AI-induced indifference. In Experiment 1, after receiving an unfair monetary allocation by an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: People blame AI actors less than their human counterparts for unfair behavior, decreasing people's desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood to act prosocially. Implications for future research are discussed.
All preregistrations, data, code, statistical outputs, stimuli qsf files, and the Supplementary Appendix are posted on OSF at: https://bit.ly/OSF_unfairAI
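For readers who want a concrete feel for the paradigm, below is a minimal Python sketch of the two-stage design described in the abstract: each simulated participant first receives an unfair allocation from either an AI or a human actor (between-subjects), then decides whether to punish a different unfair human actor at a personal cost, and punishment rates are compared with a two-proportion z-test. The condition names, sample sizes, and punishment rates are hypothetical placeholders, not figures from the paper; the authors' actual materials, data, and analysis code are on the OSF page linked above.

```python
# Hypothetical simulation of the two-stage design; rates and Ns are
# illustrative assumptions, not values reported in the paper.
import math
import random

random.seed(42)

# Stage 1 condition -> assumed probability of the Stage 2 prosocial act
# (punishing a *different* unfair human actor at a personal cost).
ASSUMED_PUNISH_RATE = {"human_actor": 0.55, "ai_actor": 0.40}
N_PER_CELL = 300  # hypothetical cell size


def run_participant(condition: str) -> int:
    """Simulate one participant's costly-punishment decision (1 = punish)."""
    return int(random.random() < ASSUMED_PUNISH_RATE[condition])


results = {
    cond: [run_participant(cond) for _ in range(N_PER_CELL)]
    for cond in ASSUMED_PUNISH_RATE
}


def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p from the normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p


x_human = sum(results["human_actor"])
x_ai = sum(results["ai_actor"])
z, p = two_proportion_z(x_human, N_PER_CELL, x_ai, N_PER_CELL)
print(f"punished after unfair human actor: {x_human}/{N_PER_CELL}")
print(f"punished after unfair AI actor:    {x_ai}/{N_PER_CELL}")
print(f"two-proportion z = {z:.2f}, two-sided p = {p:.4f}")
```

With the assumed rates, the simulation recovers a lower punishment rate in the AI condition, mirroring the direction of the reported effect; replacing the simulated Stage 2 decisions with observed responses would turn this into the corresponding analysis on real data.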
Source journal

Cognition (PSYCHOLOGY, EXPERIMENTAL)
CiteScore: 6.40
Self-citation rate: 5.90%
Articles published: 283
About the journal: Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.