{"title":"Naming something collective does not make it so: algorithmic discrimination and access to justice","authors":"Jenni Hakkarainen","doi":"10.14763/2021.4.1600","DOIUrl":null,"url":null,"abstract":"The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and therefore legal protection thereof needs to be collective. Legal procedures are technologies and design objects that embed values that can affect their usability to perform the task they are built for. Drawing from science and technology studies (STS) and feminist critique on law, the article argues that procedural law fails to address algorithmic discrimination, as legal protection is built on datacentrism and individual-centred law. As to the future of new procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster. Introduction: Technology, discrimination and access to justice We discriminate. Discrimination can be intentional or unconscious, as human beings have biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly. Human beings are not the only ones who embed and mediate values. Things are also claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality. Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have turned into reality, as computer programmes used e.g. in recruiting1 have been discovered to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminating algorithms have been embedded in predictive policing systems2 and platform economy governance3, just to name two examples. Despite growing awareness, the current research and institutional responses to discrimination are partly hindered by their inability to recognise the connections between algorithmic discrimination and the long history of research on discrimination as well as the role that technologies play in postmodern society. For example, non-technology specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal 1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021. 2. See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithmsracist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021. 3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021. 
2 Internet Policy Review 10(4) | 2021","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.4.1600","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and that legal protection against it therefore needs to be collective. Legal procedures are technologies and design objects that embed values, and those values can affect their ability to perform the task they are built for. Drawing on science and technology studies (STS) and feminist critique of law, the article argues that procedural law fails to address algorithmic discrimination because legal protection is built on datacentrism and individual-centred law. As to the future of procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward.

This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.

Introduction: Technology, discrimination and access to justice

We discriminate. Discrimination can be intentional or unconscious, as human beings hold biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly.

Human beings are not the only ones who embed and mediate values. Things, too, are claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality.

Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have become reality, as computer programmes used, for example, in recruiting [1] have been found to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminatory algorithms have been embedded in predictive policing systems [2] and platform economy governance [3], to name just two examples.

Despite growing awareness, current research and institutional responses to discrimination are partly hindered by a failure to recognise the connections between algorithmic discrimination and the long history of research on discrimination, as well as the role that technologies play in postmodern society. For example, non-technology-specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal

1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021.
2. See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021.
3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021.