AI-induced indifference: Unfair AI reduces prosociality

Authors: Raina Zexuan Zhang, Ellie J. Kyung, Chiara Longoni, Luca Cian, Kellen Mrkva
Journal: Cognition, Volume 254, Article 105937 (JCR Q1, Psychology, Experimental; Impact Factor 2.8)
DOI: 10.1016/j.cognition.2024.105937
Published: 2024-09-23 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0010027724002233
The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. Four experiments (combined N = 2425) show that receiving an unfair allocation by an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision—an effect we term AI-induced indifference. In Experiment 1, after receiving an unfair monetary allocation by an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: People blame AI actors less than their human counterparts for unfair behavior, decreasing people's desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood to act prosocially. Implications for future research are discussed.
All preregistrations, data, code, statistical outputs, stimulus (.qsf) files, and the Supplementary Appendix are posted on OSF at: https://bit.ly/OSF_unfairAI
About the journal:
Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.