When AI is fairer than humans: The role of egocentrism in moral and fairness judgments of AI and human decisions

Impact Factor: 5.8 | Q1 (Psychology, Experimental)
Katarzyna Miazek, Konrad Bocian
{"title":"When AI is fairer than humans: The role of egocentrism in moral and fairness judgments of AI and human decisions","authors":"Katarzyna Miazek,&nbsp;Konrad Bocian","doi":"10.1016/j.chbr.2025.100719","DOIUrl":null,"url":null,"abstract":"<div><div>Algorithmic fairness is a core principle of trustworthy Artificial Intelligence (AI), yet how people perceive fairness in AI decision-making remains understudied. Prior research suggests that moral and fairness judgments are egocentrically biased, favoring self-interested outcomes. Drawing on the Computers Are Social Actors (CASA) framework and egocentric ethics theory we examine whether this bias extends to AI decision-makers, comparing fairness and morality perceptions of AI and human agents. Across three experiments (two preregistered, N = 1880, Prolific US samples), participants evaluated financial decisions made by AI or human agents. Self-interest was manipulated by assigning participants to conditions where they either benefited from, were harmed by, or remained neutral to the decision outcome. Results showed that self-interest significantly biased fairness judgments—decision-makers who made unfair but personally beneficial decisions were perceived as more moral and fairer than those whose decisions benefited others (Studies 1 &amp; 2) or those who made fair but personally costly decisions (Study 3). However, this egocentric bias was weaker for AI than for humans, mediated by a lower perceived mind and reduced liking for AI (Studies 2 &amp; 3). These findings suggest that fairness judgments of AI are not immune to egocentric biases, but are moderated by cognitive and social perceptions of AI versus humans. Our studies challenge the assumption that algorithmic fairness alone is sufficient for achieving fair outcomes. This provides novel insight for AI deployment in high-stakes decision-making domains, highlighting the need to consider both algorithmic fairness and human biases when evaluating AI decisions.</div></div>","PeriodicalId":72681,"journal":{"name":"Computers in human behavior reports","volume":"19 ","pages":"Article 100719"},"PeriodicalIF":5.8000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in human behavior reports","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2451958825001344","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Algorithmic fairness is a core principle of trustworthy Artificial Intelligence (AI), yet how people perceive fairness in AI decision-making remains understudied. Prior research suggests that moral and fairness judgments are egocentrically biased, favoring self-interested outcomes. Drawing on the Computers Are Social Actors (CASA) framework and egocentric ethics theory, we examine whether this bias extends to AI decision-makers, comparing fairness and morality perceptions of AI and human agents. Across three experiments (two preregistered, N = 1880, Prolific US samples), participants evaluated financial decisions made by AI or human agents. Self-interest was manipulated by assigning participants to conditions in which they benefited from, were harmed by, or were unaffected by the decision outcome. Results showed that self-interest significantly biased fairness judgments: decision-makers who made unfair but personally beneficial decisions were perceived as more moral and fairer than those whose decisions benefited others (Studies 1 & 2) or those who made fair but personally costly decisions (Study 3). However, this egocentric bias was weaker for AI than for humans, an effect mediated by lower perceived mind and reduced liking for AI (Studies 2 & 3). These findings suggest that fairness judgments of AI are not immune to egocentric biases but are moderated by cognitive and social perceptions of AI versus humans. Our studies challenge the assumption that algorithmic fairness alone is sufficient for achieving fair outcomes. This provides novel insight for AI deployment in high-stakes decision-making domains, highlighting the need to consider both algorithmic fairness and human biases when evaluating AI decisions.
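To make the reported mediation pattern (agent type → perceived mind → fairness judgment) concrete, below is a minimal, hypothetical sketch of how such an indirect effect is commonly estimated with a percentile bootstrap. The simulated data, variable names, effect sizes, and bootstrap settings are illustrative assumptions only; they are not the authors' materials or analysis code.

```python
# Hypothetical mediation sketch (not the authors' analysis).
# Predictor: agent (0 = human, 1 = AI); mediator: perceived mind; outcome: fairness rating.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated data under the assumed pattern: AI is attributed less mind,
# and perceived mind positively predicts fairness judgments.
n = 400
agent = rng.integers(0, 2, n)
mind = 4.0 - 1.0 * agent + rng.normal(0, 1, n)
fairness = 2.0 + 0.6 * mind + 0.1 * agent + rng.normal(0, 1, n)
df = pd.DataFrame({"agent": agent, "mind": mind, "fairness": fairness})

def indirect_effect(data: pd.DataFrame) -> float:
    """a*b: path a (agent -> mind) times path b (mind -> fairness, controlling for agent)."""
    a = smf.ols("mind ~ agent", data=data).fit().params["agent"]
    b = smf.ols("fairness ~ mind + agent", data=data).fit().params["mind"]
    return a * b

# Percentile-bootstrap confidence interval for the indirect effect.
boot = np.array([indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(2000)])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

A confidence interval excluding zero would be consistent with perceived mind carrying part of the agent-type effect, which is the kind of evidence the abstract summarizes for Studies 2 and 3.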