Is artificial intelligence (AI) research biased and conceptually vague? A systematic review of research on bias and discrimination in the context of using AI in human resource management
Authors: Ivan Kekez, Lode Lauwaert, Nina Begičević Ređep
Journal: Technology in Society, Volume 81, Article 102818
DOI: 10.1016/j.techsoc.2025.102818
Publication date: 2025-01-16
URL: https://www.sciencedirect.com/science/article/pii/S0160791X25000089
Citations: 0
Abstract
This paper presents a systematic review, conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, of 64 papers on bias and discrimination in the context of using artificial intelligence (AI). Specifically, limiting the scope to research in human resource management (HRM), it aims to answer three questions relevant to the research community. The first question is whether research papers define the terms 'bias' and 'discrimination', and if so, how. The second, given that bias and discrimination take different forms, is exactly which forms are being investigated: are any forms of bias and discrimination underrepresented? The third question is whether a negativity bias exists in research on bias and discrimination in the context of AI. The answers to the first two questions point to some research problems. The review shows that in a substantial number of papers, the terms 'bias' and 'discrimination' are either not defined at all or only loosely defined. Furthermore, researchers focus disproportionately on bias and discrimination related to skin tone (racism) and gender (sexism). In the discussion, we explain why this is undesirable on both scientific and extratheoretical grounds. The answer to the last question is negative: there is a relatively good balance between research that focuses on the positive effects of AI on bias and discrimination and research that deals with AI leading to (more) bias and discrimination.
About the journal:
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.