{"title":"Algorithm-facilitated discrimination: a socio-legal study of the use by employers of artificial intelligence hiring systems","authors":"NATALIE SHEARD","doi":"10.1111/jols.12535","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence (AI) hiring systems (AHSs) are used by employers every day to screen and shortlist job candidates. Despite this, substantial gaps exist in our understanding of the real – as opposed to theoretical – risks of discrimination when these systems are deployed. This article reports on findings from qualitative empirical research investigating the use of AHSs by Australian employers. It demonstrates that the way in which these systems are operated in practice creates serious risks of algorithm-facilitated discrimination. This may arise from the data, the use of proxies, the system's implementation, new structural barriers, a failure to provide reasonable adjustments, or the facilitation of intentional discrimination. These findings are significant, make an original contribution to an emerging field of research, and are relevant in any jurisdiction where an AHS is in use. There is a lot at stake when such discrimination occurs; as one research participant acknowledged, a ‘job application is literally a person's attempt to change their life with a new job’.</p>","PeriodicalId":51544,"journal":{"name":"Journal of Law and Society","volume":"52 2","pages":"269-291"},"PeriodicalIF":1.3000,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jols.12535","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Law and Society","FirstCategoryId":"90","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jols.12535","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence (AI) hiring systems (AHSs) are used by employers every day to screen and shortlist job candidates. Despite this, substantial gaps exist in our understanding of the real – as opposed to theoretical – risks of discrimination when these systems are deployed. This article reports on findings from qualitative empirical research investigating the use of AHSs by Australian employers. It demonstrates that the way in which these systems are operated in practice creates serious risks of algorithm-facilitated discrimination. This may arise from the data, the use of proxies, the system's implementation, new structural barriers, a failure to provide reasonable adjustments, or the facilitation of intentional discrimination. These findings are significant, make an original contribution to an emerging field of research, and are relevant in any jurisdiction where an AHS is in use. There is a lot at stake when such discrimination occurs; as one research participant acknowledged, a ‘job application is literally a person's attempt to change their life with a new job’.
Journal Introduction:
Established as the leading British periodical for socio-legal studies, the Journal of Law and Society offers an interdisciplinary approach. It is committed to achieving a broad international appeal, attracting contributions and addressing issues from a range of legal cultures, as well as theoretical concerns of cross-cultural interest. It produces an annual special issue, which is also published in book form. It has a widely respected Book Review section and is cited all over the world. Challenging, authoritative and topical, the journal appeals to legal researchers and practitioners as well as sociologists, criminologists and other social scientists.