{"title":"Shaping the fairness journey: The roles of AI literacy, explanation, and interpersonal interaction in AI interviews","authors":"Yi Xu, Zhiyun Chen, Mengyuan Dong","doi":"10.1016/j.ijhcs.2025.103629","DOIUrl":null,"url":null,"abstract":"<div><div>Grounded in organizational justice theory, this two-study investigation provides a comprehensive examination of fairness perceptions across the entire AI interviews process. Through a three-stage experimental design (Study 1, <em>N</em> = 113; Study 2, <em>N</em> = 206), we explored how design and outcome factors influence procedural and distributive justice. We manipulated the AI’s explanation (With vs. Without) and level of interpersonal interaction (High vs. low) during the interview process, and the interview result (Pass vs. Fail) and decision agent (100% AI vs. 50% AI + 50% Human) in the post-decision stage. Results indicate that while candidate AI literacy, human-in-the-loop decision-making, and positive outcomes consistently improved fairness perceptions, the effects of system design were more complex. Design features intended to enhance the user experience, such as high AI interactivity and detailed explanations, improved aspects of procedural justice during the interview. Yet, these sometimes backfired by diminishing the perceived distributive justice of the final decision. This reveals a critical tension between a positive process experience and a fair outcome evaluation. These complex effects underscore the practical need for a holistic design approach that manages the entire candidate journey. Fair AI systems require not only improving candidate literacy but also carefully designing system explanations to manage applicant expectations effectively.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103629"},"PeriodicalIF":5.1000,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581925001867","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0
Abstract
Grounded in organizational justice theory, this two-study investigation provides a comprehensive examination of fairness perceptions across the entire AI interview process. Through a three-stage experimental design (Study 1, N = 113; Study 2, N = 206), we explored how design and outcome factors influence procedural and distributive justice. We manipulated the AI's explanation (With vs. Without) and the level of interpersonal interaction (High vs. Low) during the interview process, and the interview result (Pass vs. Fail) and decision agent (100% AI vs. 50% AI + 50% Human) in the post-decision stage. Results indicate that while candidate AI literacy, human-in-the-loop decision-making, and positive outcomes consistently improved fairness perceptions, the effects of system design were more complex. Design features intended to enhance the user experience, such as high AI interactivity and detailed explanations, improved aspects of procedural justice during the interview. Yet these features sometimes backfired, diminishing the perceived distributive justice of the final decision. This reveals a critical tension between a positive process experience and a fair outcome evaluation. These complex effects underscore the practical need for a holistic design approach that manages the entire candidate journey: fair AI systems require not only improving candidates' AI literacy but also carefully designing system explanations to manage applicant expectations effectively.
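To make the manipulation structure concrete, the minimal Python sketch below enumerates the four two-level factors as if they were fully crossed. This is purely illustrative: the abstract does not state that the factors were fully crossed or how they were divided between the two studies, and every variable name here is hypothetical rather than taken from the paper.

# Illustrative sketch (not from the paper): enumerate the manipulated factors
# described in the abstract, assuming a fully crossed design for illustration.
from itertools import product

interview_stage = {
    "explanation": ["With", "Without"],   # AI's explanation during the interview
    "interaction": ["High", "Low"],       # level of interpersonal interaction
}
post_decision_stage = {
    "result": ["Pass", "Fail"],                            # interview outcome
    "decision_agent": ["100% AI", "50% AI + 50% Human"],   # who decides
}

# Combine the two stages' factors (dicts preserve insertion order in Python 3.7+).
factors = {**interview_stage, **post_decision_stage}
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(f"{len(conditions)} conditions if all four factors were fully crossed")  # 16
for condition in conditions[:2]:
    print(condition)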
Journal Introduction:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...