Algorithmic Discrimination and Input Accountability under the Civil Rights Acts

Robert P. Bartlett, Adair Morse, N. Wallace, Richard Stanton
{"title":"Algorithmic Discrimination and Input Accountability under the Civil Rights Acts","authors":"Robert P. Bartlett, Adair Morse, N. Wallace, Richard Stanton","doi":"10.2139/ssrn.3674665","DOIUrl":null,"url":null,"abstract":"The disproportionate burden of COVID-19 among communities of color, together with a necessary renewed attention to racial inequalities, have lent new urgency to concerns that algorithmic decision-making can lead to unintentional discrimination against members of historically marginalized groups. These concerns are being expressed through Congressional subpoenas, regulatory investigations, and an increasing number of algorithmic accountability bills pending in both state legislatures and Congress. To date, however, prominent efforts to define algorithmic accountability have tended to focus on output-oriented policies that may facilitate illegitimate discrimination or involve fairness corrections unlikely to be legally valid. Worse still, other approaches focus merely on a model’s predictive accuracy—an approach at odds with long-standing U.S. antidiscrimination law.\r\n\r\nWe provide a workable definition of algorithmic accountability that is rooted in the caselaw addressing statistical discrimination in the context of Title VII of the Civil Rights Act of 1964. Using instruction from the burden-shifting framework, codified to implement Title VII, we formulate a simple statistical test to apply to the design and review of the inputs used in any algorithmic decision-making processes. Application of the test, which we label the input accountability test, constitutes a legally viable, deployable tool that can prevent an algorithmic model from systematically penalizing members of protected groups who are otherwise qualified in a legitimate target characteristic of interest.","PeriodicalId":155642,"journal":{"name":"LSN: Anti-Discrimination Law (Topic)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"LSN: Anti-Discrimination Law (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3674665","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

The disproportionate burden of COVID-19 among communities of color, together with a necessary renewed attention to racial inequalities, has lent new urgency to concerns that algorithmic decision-making can lead to unintentional discrimination against members of historically marginalized groups. These concerns are being expressed through Congressional subpoenas, regulatory investigations, and a growing number of algorithmic accountability bills pending in state legislatures and in Congress. To date, however, prominent efforts to define algorithmic accountability have tended to focus on output-oriented policies that may facilitate illegitimate discrimination or involve fairness corrections unlikely to be legally valid. Worse still, other approaches focus merely on a model's predictive accuracy, an approach at odds with long-standing U.S. antidiscrimination law.

We provide a workable definition of algorithmic accountability that is rooted in the caselaw addressing statistical discrimination in the context of Title VII of the Civil Rights Act of 1964. Drawing on the burden-shifting framework codified to implement Title VII, we formulate a simple statistical test to apply to the design and review of the inputs used in any algorithmic decision-making process. Application of the test, which we label the input accountability test, constitutes a legally viable, deployable tool that can prevent an algorithmic model from systematically penalizing members of protected groups who are otherwise qualified in a legitimate target characteristic of interest.
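The abstract does not spell out the mechanics of the input accountability test. Below is a minimal sketch of what an input-level screen in this spirit might look like, assuming the test asks whether a candidate input's variation left unexplained by the legitimate target characteristic remains correlated with protected-group status. The function name, variable names, and the two-stage OLS structure are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch only: a two-stage OLS screen for one candidate input,
# assuming the test flags inputs whose residual (after controlling for the
# legitimate target) still correlates with protected-group membership.
import numpy as np
import statsmodels.api as sm


def input_accountability_screen(x, target, protected, alpha=0.05):
    """Screen one candidate input variable (hypothetical helper).

    x         : candidate model input (e.g., a credit-file attribute)
    target    : legitimate target characteristic (e.g., a default-risk measure)
    protected : 0/1 indicator of protected-group membership
    alpha     : significance level for flagging the input
    """
    # Stage 1: remove the part of x that the legitimate target explains.
    stage1 = sm.OLS(x, sm.add_constant(target)).fit()
    residual = stage1.resid

    # Stage 2: test whether the leftover variation tracks protected status.
    # A significant coefficient suggests the input proxies for group
    # membership beyond the legitimate target and would penalize
    # otherwise-qualified members of the protected group.
    stage2 = sm.OLS(residual, sm.add_constant(protected)).fit()
    coef, p_value = stage2.params[1], stage2.pvalues[1]

    return {"coef_on_protected": coef, "p_value": p_value, "flagged": p_value < alpha}


# Hypothetical usage with simulated data.
rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n).astype(float)
target = rng.normal(size=n)                               # legitimate qualification
x_clean = target + rng.normal(size=n)                     # input driven only by the target
x_proxy = target + 0.5 * protected + rng.normal(size=n)   # input that also tracks group status

print(input_accountability_screen(x_clean, target, protected))   # expected: not flagged
print(input_accountability_screen(x_proxy, target, protected))   # expected: flagged
```

Under these assumptions, an input passes the screen when its correlation with protected-group status is fully accounted for by the legitimate target; the actual legal and statistical details are developed in the paper itself.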