Private Accountability in an Age of Artificial Intelligence

Sonia K. Katyal
The Cambridge Handbook of the Law of Algorithms
DOI: 10.1017/9781108680844.004
Published: 2019-01-03
Cited by: 50

Abstract

In this Article, I explore the impending conflict between the protection of civil rights and artificial intelligence (AI). While both areas of law have amassed rich, well-developed bodies of scholarly work and doctrinal support, a growing body of scholars is interrogating the intersection between them. This Article argues that the issues surrounding algorithmic accountability demonstrate a deeper, more structural tension within a new generation of disputes regarding law and technology. As I argue, the true promise of AI does not lie in the information we reveal to one another, but rather in the questions it raises about the interaction of technology, property, and civil rights. For this reason, I argue that we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability. Instead, given the state's reluctance to address the issue, we must turn to mechanisms of transparency and accountability that stem from private industry rather than public regulation. The issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from the ones that preceded it. Since we are in a world where the activities of private corporations, rather than the state, are raising concerns about privacy, due process, and discrimination, we must focus on the role of private corporations in addressing the issue. Toward this end, I discuss a variety of tools to help eliminate the opacity of AI, including codes of conduct, impact statements, and whistleblower protection, which I argue carry the potential to encourage greater endogeneity in civil rights enforcement. Ultimately, by examining the relationship between private industry and civil rights, we can perhaps develop a new generation of forms of accountability in the process.