What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration

Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener
{"title":"盒子里有什么?公共行政计算辅助决策中可解释性的法律要求","authors":"Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener","doi":"10.2139/ssrn.3402974","DOIUrl":null,"url":null,"abstract":"Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions often carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which being the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision making (ADM) systems. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced for combatting the lack of computational transparency. Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach, and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field’s unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the “administrative Turing test” which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability on which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.","PeriodicalId":369466,"journal":{"name":"Political Economy: Structure & Scope of Government eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration\",\"authors\":\"Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener\",\"doi\":\"10.2139/ssrn.3402974\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions often carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which being the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision making (ADM) systems. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced for combatting the lack of computational transparency. 
Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach, and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field’s unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the “administrative Turing test” which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability on which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.\",\"PeriodicalId\":369466,\"journal\":{\"name\":\"Political Economy: Structure & Scope of Government eJournal\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Political Economy: Structure & Scope of Government eJournal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3402974\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Political Economy: Structure & Scope of Government eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3402974","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15

Abstract

Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions frequently carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which is the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision-making (ADM) systems. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced to combat the lack of computational transparency. Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field's unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the "administrative Turing test", which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability against which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.
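To make the "administrative Turing test" concrete, below is a minimal sketch of how such a blind-comparison protocol might be operationalised. The paper proposes the test conceptually and does not prescribe an implementation; the function names, the evaluator interface, and the scoring rule here are all illustrative assumptions, not the authors' method.

```python
import random

def administrative_turing_test(cases, human_explain, machine_explain, evaluator):
    """Illustrative sketch (assumed design, not from the paper):
    for each case, show an evaluator a human-written and a machine-generated
    explanation in random order and ask which one is the machine's. If
    evaluators cannot reliably identify the machine explanation, the ADM
    system meets the human benchmark of explainability under this toy
    operationalisation."""
    correct = 0
    for case in cases:
        pair = [("human", human_explain(case)), ("machine", machine_explain(case))]
        random.shuffle(pair)
        # The evaluator sees only the two explanation texts and returns the
        # index (0 or 1) of the one it believes is machine-generated.
        guess = evaluator(case, pair[0][1], pair[1][1])
        if pair[guess][0] == "machine":
            correct += 1
    # A rate near 0.5 means machine explanations are indistinguishable
    # from the human standard; a rate near 1.0 means they stand out.
    return correct / len(cases)

if __name__ == "__main__":
    # Trivial stand-ins to show the protocol's shape.
    cases = [f"application-{i}" for i in range(100)]
    human = lambda c: f"Refused: the income threshold was not met in {c}."
    machine = lambda c: f"Refused: the income threshold was not met in {c}."
    guesser = lambda c, a, b: random.randint(0, 1)  # evaluator cannot tell
    rate = administrative_turing_test(cases, human, machine, guesser)
    print(f"Machine explanation identified in {rate:.0%} of cases")
```

Run repeatedly with real case files and trained evaluators, such a protocol could serve the continual-validation role the authors envisage: the test is not passed once and for all but re-administered as the ADM system and the surrounding law evolve.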