Is artificial intelligence (AI) research biased and conceptually vague? A systematic review of research on bias and discrimination in the context of using AI in human resource management

Impact Factor: 10.1 · Zone 1 (Sociology) · JCR Q1 (Social Issues)
Ivan Kekez, Lode Lauwaert, Nina Begičević Ređep
DOI: 10.1016/j.techsoc.2025.102818
Journal: Technology in Society, Volume 81, Article 102818
Published: 2025-01-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0160791X25000089
Citations: 0

Abstract

This paper presents a systematic review, conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), of 64 papers on bias and discrimination in the context of using artificial intelligence (AI). Specifically, while limiting the scope to research in human resource management (HRM), it aims to answer three questions that are relevant to the research community. The first question is whether research papers define the terms 'bias' and 'discrimination', and if so, how. Second, given that there are different forms of bias and discrimination, which forms are actually being investigated, and are any of them underrepresented? The third question is whether a negativity bias exists in research on bias and discrimination in the context of AI. The answers to the first two questions point to some research problems. The review shows that in a substantial number of papers, the terms 'bias' and 'discrimination' are not defined, or only barely so. Furthermore, researchers focus disproportionately on bias and discrimination related to skin tone (racism) and gender (sexism). In the discussion, we argue that this is undesirable for both scientific and extratheoretical reasons. The answer to the last question is negative: there is a relatively good balance between research that highlights the positive effects of AI on bias and discrimination and research that deals with AI leading to (more) bias and discrimination.
Source journal: Technology in Society
CiteScore: 17.90
Self-citation rate: 14.10%
Articles published per year: 316
Review time: 60 days
Journal description: Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.