We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task

Christopher A. Sanchez, Lena Hildenbrand, Naomi Fitter
{"title":"We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task","authors":"Christopher A. Sanchez ,&nbsp;Lena Hildenbrand ,&nbsp;Naomi Fitter","doi":"10.1016/j.chbah.2025.100154","DOIUrl":null,"url":null,"abstract":"<div><div>The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research has suggested that individuals in many applied contexts believe that these systems are less biased than human counterparts, and thus more trustworthy decision makers. The current study examined whether this common assumption was actually true when placed in a decision-making task that also contains a strong social component (i.e., the Ultimatum Game). Anthropomorphic appearance of AI opponents was also manipulated to determine whether visual appearance also contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested in both how they responded to offers from the AI system, and also how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100154"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000386","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research has suggested that individuals in many applied contexts believe that these systems are less biased than their human counterparts, and are thus more trustworthy decision makers. The current study examined whether this common assumption holds in a decision-making task that also contains a strong social component (i.e., the Ultimatum Game). The anthropomorphic appearance of AI opponents was also manipulated to determine whether visual appearance contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested both in how they responded to offers from the AI system and in how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.
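For readers unfamiliar with the task, the Ultimatum Game follows a simple protocol: a proposer offers a split of a fixed stake, and a responder either accepts (both parties receive the proposed amounts) or rejects (both receive nothing), which makes rejecting a low offer a costly fairness signal. The sketch below is a minimal illustration of one round of this protocol; it is not the study's experimental software, and the stake size and offer values are hypothetical.

```python
# Minimal sketch of one round of the Ultimatum Game.
# Hypothetical values; this is NOT the study's actual experimental software.

STAKE = 10  # fixed amount to be split each round (hypothetical stake)


def play_round(offer: int, accept: bool) -> tuple[int, int]:
    """Resolve one round: the proposer offers `offer` out of STAKE.

    If the responder accepts, the proposer keeps STAKE - offer and the
    responder receives `offer`. If the responder rejects, both get nothing,
    which is why rejecting an unfair offer is a costly fairness signal.
    """
    if not 0 <= offer <= STAKE:
        raise ValueError("offer must be between 0 and the stake")
    if accept:
        return STAKE - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0  # rejection punishes both players


# Example: a responder rejects an unfair 8/2 split proposed by an AI opponent.
proposer_payoff, responder_payoff = play_round(offer=2, accept=False)
print(proposer_payoff, responder_payoff)  # 0 0
```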