{"title":"基于 ML 的行政决策和支持系统中的偏见和歧视","authors":"Trang Anh MAC","doi":"10.1016/j.clsr.2024.106070","DOIUrl":null,"url":null,"abstract":"<div><div>In 2020, the alleged wilful and gross negligence of four social workers, who did not notice and failed to report the risks to an eight-year-old boy's life from the violent abuses by his mother and her boyfriend back in 2013, ultimately leading to his death, had been heavily criticised.<span><span><sup>1</sup></span></span> The documentary, Trials of Gabriel Fernandez in 2020,<span><span><sup>2</sup></span></span> has discussed the Allegheny Family Screening Tool (AFST<span><span><sup>3</sup></span></span>), implemented by Allegheny County, US since 2016 to foresee involvement with the social services system. Rhema Vaithianathan<span><span><sup>4</sup></span></span>, the Centre for Social Data Analytics co-director, and the Children's Data Network<span><span><sup>5</sup></span></span> members, with Emily Putnam-Hornstein<span><span><sup>6</sup></span></span>, established the exemplary and screening tool, integrating and analysing enormous amounts of data details of the person allegedly associating to injustice to children, housed in DHS Data Warehouse<span><span><sup>7</sup></span></span>. They considered that may be the solution for the failure of the overwhelmed manual administrative systems. However, like other applications of AI in our modern world, in the public sector, Algorithmic Decisions Making and Support systems, it is also denounced because of the data and algorithmic bias.<span><span><sup>8</sup></span></span> This topic has been weighed up for the last few years but not has been put to an end yet. Therefore, this humble research is a glance through the problems - the bias and discrimination of AI based Administrative Decision Making and Support systems. At first, I determined the bias and discrimination, their blur boundary between two definitions from the legal perspective, then went into the details of the causes of bias in each stage of AI system development, mainly as the results of bias data sources and human decisions in the past, society and political contexts, and the developers’ ethics. In the same chapter, I presented the non-discrimination legal framework, including their application and convergence with the administration laws in regard to the automated decision making and support systems, as well as the involvement of ethics and regulations on personal data protection. In the next chapter, I tried to outline new proposals for potential solutions from both legal and technical perspectives. In respect to the former, my focus was fairness definitions and other current options for the developers, for example, the toolkits, benchmark datasets, debiased data, etc. 
For the latter, I reported the strategies and new proposals governing the datasets and AI systems development, implementation in the near future.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106070"},"PeriodicalIF":3.3000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bias and discrimination in ML-based systems of administrative decision-making and support\",\"authors\":\"Trang Anh MAC\",\"doi\":\"10.1016/j.clsr.2024.106070\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In 2020, the alleged wilful and gross negligence of four social workers, who did not notice and failed to report the risks to an eight-year-old boy's life from the violent abuses by his mother and her boyfriend back in 2013, ultimately leading to his death, had been heavily criticised.<span><span><sup>1</sup></span></span> The documentary, Trials of Gabriel Fernandez in 2020,<span><span><sup>2</sup></span></span> has discussed the Allegheny Family Screening Tool (AFST<span><span><sup>3</sup></span></span>), implemented by Allegheny County, US since 2016 to foresee involvement with the social services system. Rhema Vaithianathan<span><span><sup>4</sup></span></span>, the Centre for Social Data Analytics co-director, and the Children's Data Network<span><span><sup>5</sup></span></span> members, with Emily Putnam-Hornstein<span><span><sup>6</sup></span></span>, established the exemplary and screening tool, integrating and analysing enormous amounts of data details of the person allegedly associating to injustice to children, housed in DHS Data Warehouse<span><span><sup>7</sup></span></span>. They considered that may be the solution for the failure of the overwhelmed manual administrative systems. However, like other applications of AI in our modern world, in the public sector, Algorithmic Decisions Making and Support systems, it is also denounced because of the data and algorithmic bias.<span><span><sup>8</sup></span></span> This topic has been weighed up for the last few years but not has been put to an end yet. Therefore, this humble research is a glance through the problems - the bias and discrimination of AI based Administrative Decision Making and Support systems. At first, I determined the bias and discrimination, their blur boundary between two definitions from the legal perspective, then went into the details of the causes of bias in each stage of AI system development, mainly as the results of bias data sources and human decisions in the past, society and political contexts, and the developers’ ethics. In the same chapter, I presented the non-discrimination legal framework, including their application and convergence with the administration laws in regard to the automated decision making and support systems, as well as the involvement of ethics and regulations on personal data protection. In the next chapter, I tried to outline new proposals for potential solutions from both legal and technical perspectives. In respect to the former, my focus was fairness definitions and other current options for the developers, for example, the toolkits, benchmark datasets, debiased data, etc. 
For the latter, I reported the strategies and new proposals governing the datasets and AI systems development, implementation in the near future.</div></div>\",\"PeriodicalId\":51516,\"journal\":{\"name\":\"Computer Law & Security Review\",\"volume\":\"55 \",\"pages\":\"Article 106070\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Law & Security Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0267364924001365\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Law & Security Review","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0267364924001365","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0
Abstract
In 2020, four social workers were heavily criticised for alleged wilful and gross negligence: back in 2013 they did not notice, and failed to report, the risks to an eight-year-old boy's life from violent abuse by his mother and her boyfriend, abuse that ultimately led to his death.[1] The documentary The Trials of Gabriel Fernandez (2020)[2] discusses the Allegheny Family Screening Tool (AFST),[3] implemented by Allegheny County, US since 2016 to predict future involvement with the social services system. Rhema Vaithianathan,[4] co-director of the Centre for Social Data Analytics, together with members of the Children's Data Network[5] and Emily Putnam-Hornstein,[6] developed the screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse,[7] about persons allegedly associated with injustice to children. They considered it a possible answer to the failures of overwhelmed manual administrative systems. However, like other applications of AI in the modern world, algorithmic decision-making and support systems in the public sector are also denounced for data and algorithmic bias.[8] The topic has been debated for the last few years but has not yet been settled. This research is therefore a survey of the problem: bias and discrimination in AI-based administrative decision-making and support systems. First, I define bias and discrimination and the blurred boundary between the two concepts from a legal perspective, then examine the causes of bias at each stage of AI system development, mainly the results of biased data sources and past human decisions, social and political contexts, and developers' ethics. In the same chapter, I present the non-discrimination legal framework, including its application to, and convergence with, administrative law as regards automated decision-making and support systems, as well as the role of ethics and of regulations on personal data protection. In the next chapter, I outline new proposals for potential solutions from both technical and legal perspectives. With respect to the former, my focus is on fairness definitions and other options currently available to developers, for example toolkits, benchmark datasets, and debiased data. For the latter, I report the strategies and new proposals governing datasets and the development and implementation of AI systems in the near future.
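The abstract's mention of "fairness definitions" and developer toolkits can be made concrete with a minimal, hypothetical sketch that is not taken from the article: statistical parity difference and equal opportunity difference are two widely used group-fairness metrics of the kind such toolkits typically report, computed here with plain NumPy on invented screening decisions.

# Minimal illustration of two common group-fairness metrics.
# All data below is hypothetical and for illustration only.
import numpy as np

def statistical_parity_difference(y_pred, group):
    # P(prediction = 1 | group = 1) - P(prediction = 1 | group = 0)
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Gap in true-positive rate between group 1 and group 0.
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

# Hypothetical screening outcomes for two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # actual need for intervention
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model's screening decision
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

print(statistical_parity_difference(y_pred, group))         # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))  # about 0.33

A value of zero on either metric means parity between the two groups; debiasing interventions and benchmark datasets of the kind the abstract mentions are typically evaluated against such measures.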
About the journal:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It is looking for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.