We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task

Christopher A. Sanchez, Lena Hildenbrand, Naomi Fitter

Computers in Human Behavior: Artificial Humans, Volume 4, Article 100154 (2025). https://doi.org/10.1016/j.chbah.2025.100154
Abstract
The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research has suggested that individuals in many applied contexts believe these systems are less biased than human counterparts, and thus more trustworthy decision makers. The current study examined whether this common assumption actually holds when tested in a decision-making task with a strong social component (i.e., the Ultimatum Game). The anthropomorphic appearance of AI opponents was also manipulated to determine whether visual appearance contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested both in how they responded to offers from the AI system and in how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.
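For readers unfamiliar with the task, the Ultimatum Game works as follows: a proposer offers a split of a fixed stake, and a responder either accepts (the split is enacted) or rejects (both players receive nothing). Rejecting an unfair offer is therefore costly punishment, which is what makes the game a useful probe of fairness reactions. Below is a minimal sketch of a single round in Python; it is purely illustrative, and the stake size, offer distributions, and acceptance threshold are all assumptions, not the authors' actual experimental implementation.

```python
# Illustrative Ultimatum Game round; parameters are hypothetical,
# not taken from the study's materials.
import random

STAKE = 10  # hypothetical pot size to be split between the players


def propose_offer(opponent_type: str) -> int:
    """Return the amount offered to the responder.

    'human' and 'ai' proposers are modeled identically here, echoing
    the paper's finding that participants treated them the same; a
    'random' (non-intelligent) proposer offers uniformly at random.
    """
    if opponent_type == "random":
        return random.randint(0, STAKE)
    # A typical near-fair offer with slight self-favoring variation.
    return random.choice([4, 5])


def respond(offer: int, threshold: int = 3) -> bool:
    """Accept if the offer meets a fairness threshold.

    Rejection is costly punishment: if the responder rejects,
    neither player receives anything.
    """
    return offer >= threshold


def play_round(opponent_type: str) -> tuple[int, int]:
    """Play one round; return (proposer payoff, responder payoff)."""
    offer = propose_offer(opponent_type)
    if respond(offer):
        return STAKE - offer, offer
    return 0, 0  # rejected: both players get nothing


if __name__ == "__main__":
    for kind in ("human", "ai", "random"):
        print(kind, play_round(kind))
```

The key design point the sketch highlights is that a responder who rejects a low offer sacrifices a sure gain, so acceptance rates and subsequent counter-offers jointly reveal how fairly participants judge, and treat, each opponent type.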