The Impoverished Publicness of Algorithmic Decision Making

Impact Factor: 1.4 · Zone 2 (Sociology) · JCR Q1 (Law)
Neli Frost
{"title":"算法决策的贫乏公共性","authors":"Neli Frost","doi":"10.1093/ojls/gqae027","DOIUrl":null,"url":null,"abstract":"\n The increasing use of machine learning (ML) in public administration requires that we think carefully about the political and legal constraints imposed on public decision making. These developments confront us with the following interrelated questions: can algorithmic public decisions be truly ‘public’? And, to what extent does the use of ML models compromise the ‘publicness’ of such decisions? This article is part of a broader inquiry into the myriad ways in which digital and AI technologies transform the fabric of our democratic existence by mutating the ‘public’. Focusing on the site of public administration, the article develops a conception of publicness that is grounded in a view of public administrations as communities of practice. These communities operate through dialogical, critical and synergetic interactions that allow them to track—as faithfully as possible—the public’s heterogeneous view of its interests, and reify these interests in decision making. Building on this theorisation, the article suggests that the use of ML models in public decision making inevitably generates an impoverished publicness, and thus undermines the potential of public administrations to operate as a locus of democratic construction. The article thus advocates for a reconsideration of the ways in which administrative law problematises and addresses the harms of algorithmic decision making.","PeriodicalId":47225,"journal":{"name":"Oxford Journal of Legal Studies","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Impoverished Publicness of Algorithmic Decision Making\",\"authors\":\"Neli Frost\",\"doi\":\"10.1093/ojls/gqae027\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n The increasing use of machine learning (ML) in public administration requires that we think carefully about the political and legal constraints imposed on public decision making. These developments confront us with the following interrelated questions: can algorithmic public decisions be truly ‘public’? And, to what extent does the use of ML models compromise the ‘publicness’ of such decisions? This article is part of a broader inquiry into the myriad ways in which digital and AI technologies transform the fabric of our democratic existence by mutating the ‘public’. Focusing on the site of public administration, the article develops a conception of publicness that is grounded in a view of public administrations as communities of practice. These communities operate through dialogical, critical and synergetic interactions that allow them to track—as faithfully as possible—the public’s heterogeneous view of its interests, and reify these interests in decision making. Building on this theorisation, the article suggests that the use of ML models in public decision making inevitably generates an impoverished publicness, and thus undermines the potential of public administrations to operate as a locus of democratic construction. 
The article thus advocates for a reconsideration of the ways in which administrative law problematises and addresses the harms of algorithmic decision making.\",\"PeriodicalId\":47225,\"journal\":{\"name\":\"Oxford Journal of Legal Studies\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-08-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Oxford Journal of Legal Studies\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1093/ojls/gqae027\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Oxford Journal of Legal Studies","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1093/ojls/gqae027","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

The increasing use of machine learning (ML) in public administration requires that we think carefully about the political and legal constraints imposed on public decision making. These developments confront us with the following interrelated questions: can algorithmic public decisions be truly ‘public’? And, to what extent does the use of ML models compromise the ‘publicness’ of such decisions? This article is part of a broader inquiry into the myriad ways in which digital and AI technologies transform the fabric of our democratic existence by mutating the ‘public’. Focusing on the site of public administration, the article develops a conception of publicness that is grounded in a view of public administrations as communities of practice. These communities operate through dialogical, critical and synergetic interactions that allow them to track—as faithfully as possible—the public’s heterogeneous view of its interests, and reify these interests in decision making. Building on this theorisation, the article suggests that the use of ML models in public decision making inevitably generates an impoverished publicness, and thus undermines the potential of public administrations to operate as a locus of democratic construction. The article thus advocates for a reconsideration of the ways in which administrative law problematises and addresses the harms of algorithmic decision making.
Source Journal: Oxford Journal of Legal Studies
CiteScore: 2.50
Self-citation rate: 8.30%
Articles published: 31
Journal description: The Oxford Journal of Legal Studies is published on behalf of the Faculty of Law in the University of Oxford. It is designed to encourage interest in all matters relating to law, with an emphasis on matters of theory and on broad issues arising from the relationship of law to other disciplines. No topic of legal interest is excluded from consideration. In addition to traditional questions of legal interest, the following are all within the purview of the journal: comparative and international law, the law of the European Community, legal history and philosophy, and interdisciplinary material in areas of relevance.