Preventing Racial Bias in Federal AI

M. Livingston
{"title":"Preventing Racial Bias in Federal AI","authors":"M. Livingston","doi":"10.38126/jspg160205","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence (AI) systems are increasingly used by the US federal government to replace or support decision making. AI is a computer-based system trained to recognize patterns in data and to apply these patterns to form predictions about new data for a specific task. AI is often viewed as a neutral technological tool, bringing efficiency, objectivity and accuracy to administrative functions, citizen access to services, and regulatory enforcement. However, AI can also encode and amplify the biases of society. Choices on design, implementation, and use can embed existing racial inequalities into AI, leading to a racially biased AI system producing inaccurate predictions or to harmful consequences for racial groups. Racially discriminatory AI systems have already affected public systems such as criminal justice, healthcare, financial systems and housing. This memo addresses the primary causes for the development, deployment and use of racially biased AI systems and suggests three responses to ensure that federal agencies realize the benefits of AI and protect against racially disparate impact. There are three actions that federal agencies must take to prevent racial bias: 1) increase racial diversity in AI designers, 2) implement AI impact assessment, 3) establish procedures for staff to contest automated decisions. Each proposal addresses a different stage in the lifecycle of AI used by federal agencies and helps align US policy with the Organization for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence.","PeriodicalId":171493,"journal":{"name":"Impacts of Emerging Technologies on Inequality and Sustainability","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Impacts of Emerging Technologies on Inequality and Sustainability","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.38126/jspg160205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Artificial Intelligence (AI) systems are increasingly used by the US federal government to replace or support decision making. An AI system is a computer-based system trained to recognize patterns in data and to apply those patterns to form predictions about new data for a specific task. AI is often viewed as a neutral technological tool that brings efficiency, objectivity, and accuracy to administrative functions, citizen access to services, and regulatory enforcement. However, AI can also encode and amplify the biases of society. Choices in design, implementation, and use can embed existing racial inequalities into AI, leading racially biased AI systems to produce inaccurate predictions or harmful consequences for racial groups. Racially discriminatory AI systems have already affected public systems such as criminal justice, healthcare, finance, and housing. This memo addresses the primary causes of the development, deployment, and use of racially biased AI systems and suggests three responses to ensure that federal agencies realize the benefits of AI while protecting against racially disparate impact. Federal agencies must take three actions to prevent racial bias: 1) increase racial diversity among AI designers, 2) implement AI impact assessments, and 3) establish procedures for staff to contest automated decisions. Each proposal addresses a different stage in the lifecycle of AI used by federal agencies and helps align US policy with the Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence.
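The second recommendation, AI impact assessment, typically involves quantitative disparate-impact testing of a system's outputs. Below is a minimal sketch of one common test, the four-fifths (80%) rule for adverse impact, assuming access to an audit log of automated decisions labeled by racial group; the function names and toy data are hypothetical illustrations, not a method specified in the memo.

```python
# A minimal sketch of one disparate-impact check that an AI impact
# assessment might include: the "four-fifths rule" for adverse impact.
# All names and the toy data below are hypothetical illustrations.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate for each group.

    decisions: iterable of (group, favorable) pairs, where favorable is
    a bool indicating the automated system produced a favorable outcome.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        if fav:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the classic four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical audit log of automated decisions: (group, favorable outcome)
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 50 + [("B", False)] * 50)

print(selection_rates(log))   # {'A': 0.8, 'B': 0.5}
print(four_fifths_check(log)) # {'A': True, 'B': False} -> group B flagged
```

Under the four-fifths rule, a group is flagged when its rate of favorable outcomes falls below 80% of the most favored group's rate; in the toy log above, group B's 50% rate is only 62.5% of group A's 80% rate, so the check fails and an assessment would trigger further review.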