Toward the Democratic Regulation of AI Systems: A Prolegomenon

Mariano-Florentino Cuéllar, Aziz Z Huq
DOI: 10.2139/ssrn.3671011 (https://doi.org/10.2139/ssrn.3671011)
Journal: Information Use eJournal
Published: 2020-07-14 (Journal Article)
Citations: 5

Abstract

This essay explores the challenge of regulating “artificial intelligence” (AI) in democracies. We begin with a careful definition of what is being regulated. In contrast to the relatively narrow focus on technical details of computational tools in many discussions about governing AI, we suggest that it is more useful to identify “AI systems” — embedding not only particular algorithms but design choices structuring human-computer interaction and the allocation of responsibility over decisions — as the appropriate object of regulation. We make the case that even in constitutional democracies, regulation of these systems should often depend primarily on how these systems embed forward-looking “policies” and on the social consequences of such policies, rather than on expecting clear answers to deontologically flavored questions about whether these systems violate “rights,” such as those to privacy or non-discrimination. We then canvass some of the challenges associated with carefully designed, prudent regulation of AI in democracies. We distinguish here between two types of obstacles, each calling for subtly different evaluations. On the one hand, institutional impediments can frustrate an effective democratic response. These sound in the register of political economy and in the path-dependent aspects of regulatory capacity embodied in national and sub-national institutions. On the other hand, ontological impediments can also complicate regulatory response. By this we mean to capture the sense that AI systems can be constitutive of human subjectivity — shaping attitudes, behaviors, and desires — in ways that make the very project of identifying democratic preferences particularly fraught and subject to subversion. As democratic societies contend with the mix of risks and benefits associated with AI systems, candid acknowledgement of the challenges will bring valuable attention to the endogeneity of democratic preferences and to the characteristics of institutions that have an outsized role in shaping how societies evolve.