Socially excluded employees prefer algorithmic evaluation to human assessment: The moderating role of an interdependent culture

Authors: Yoko Sugitani, Taku Togawa, Kosuke Motoki
Journal: Computers in Human Behavior: Artificial Humans, Volume 4, Article 100152
Publication date: 2025-04-09
DOI: 10.1016/j.chbah.2025.100152
URL: https://www.sciencedirect.com/science/article/pii/S2949882125000362
Citations: 0
Abstract
Organizations have embraced artificial intelligence (AI) technology for personnel assessments such as document screening, interviews, and evaluations. However, some studies have reported employees' aversive reactions to AI-based assessment, while others have shown their appreciation for AI. This study focused on the effect of workplace social context, specifically social exclusion, on employees' attitudes toward AI-based personnel assessment. Drawing on cognitive dissonance theory, we hypothesized that socially excluded employees perceive human evaluation as unfair, leading them to believe that AI-based assessments are fairer and, in turn, to hold a favorable attitude toward AI evaluation. Through three experiments wherein workplace social relationships (social exclusion vs. inclusion) were manipulated, we demonstrated that socially excluded employees showed more positive attitudes toward algorithmic assessment than those who were socially included. Further, this effect was mediated by the perceived fairness of AI assessment and was more evident in an interdependent (but not independent) self-construal culture. These findings offer novel insights into psychological research on computer use in professional practices.