{"title":"将某些东西命名为集体并不能使其成为集体:算法歧视和诉诸司法","authors":"Jenni Hakkarainen","doi":"10.14763/2021.4.1600","DOIUrl":null,"url":null,"abstract":"The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and therefore legal protection thereof needs to be collective. Legal procedures are technologies and design objects that embed values that can affect their usability to perform the task they are built for. Drawing from science and technology studies (STS) and feminist critique on law, the article argues that procedural law fails to address algorithmic discrimination, as legal protection is built on datacentrism and individual-centred law. As to the future of new procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster. Introduction: Technology, discrimination and access to justice We discriminate. Discrimination can be intentional or unconscious, as human beings have biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly. Human beings are not the only ones who embed and mediate values. Things are also claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality. Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have turned into reality, as computer programmes used e.g. in recruiting1 have been discovered to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminating algorithms have been embedded in predictive policing systems2 and platform economy governance3, just to name two examples. Despite growing awareness, the current research and institutional responses to discrimination are partly hindered by their inability to recognise the connections between algorithmic discrimination and the long history of research on discrimination as well as the role that technologies play in postmodern society. For example, non-technology specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal 1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021. 2. See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithmsracist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021. 3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021. 
2 Internet Policy Review 10(4) | 2021","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Naming something collective does not make it so: algorithmic discrimination and access to justice\",\"authors\":\"Jenni Hakkarainen\",\"doi\":\"10.14763/2021.4.1600\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and therefore legal protection thereof needs to be collective. Legal procedures are technologies and design objects that embed values that can affect their usability to perform the task they are built for. Drawing from science and technology studies (STS) and feminist critique on law, the article argues that procedural law fails to address algorithmic discrimination, as legal protection is built on datacentrism and individual-centred law. As to the future of new procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster. Introduction: Technology, discrimination and access to justice We discriminate. Discrimination can be intentional or unconscious, as human beings have biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly. Human beings are not the only ones who embed and mediate values. Things are also claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality. Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have turned into reality, as computer programmes used e.g. in recruiting1 have been discovered to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminating algorithms have been embedded in predictive policing systems2 and platform economy governance3, just to name two examples. Despite growing awareness, the current research and institutional responses to discrimination are partly hindered by their inability to recognise the connections between algorithmic discrimination and the long history of research on discrimination as well as the role that technologies play in postmodern society. For example, non-technology specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal 1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021. 2. 
See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithmsracist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021. 3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021. 2 Internet Policy Review 10(4) | 2021\",\"PeriodicalId\":219999,\"journal\":{\"name\":\"Internet Policy Rev.\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet Policy Rev.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14763/2021.4.1600\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.4.1600","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and therefore legal protection against it needs to be collective. Legal procedures are technologies and design objects that embed values, which can affect their usability for the tasks they are built to perform. Drawing on science and technology studies (STS) and feminist critique of law, the article argues that procedural law fails to address algorithmic discrimination, as legal protection is built on datacentrism and individual-centred law. As to the future of new procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward.

This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.

Introduction: Technology, discrimination and access to justice

We discriminate. Discrimination can be intentional or unconscious, as human beings hold biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly. Human beings are not the only ones who embed and mediate values: things, too, are claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality.

Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have turned into reality, as computer programmes used, for example, in recruiting [1] have been found to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminating algorithms have been embedded in predictive policing systems [2] and platform economy governance [3], to name just two examples.

Despite growing awareness, current research and institutional responses to discrimination are partly hindered by their inability to recognise the connections between algorithmic discrimination and the long history of research on discrimination, as well as the role that technologies play in postmodern society. For example, non-technology-specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal […]

1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021.
2. See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithmsracist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021.
3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021.