Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener
{"title":"盒子里有什么?公共行政计算辅助决策中可解释性的法律要求","authors":"Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener","doi":"10.2139/ssrn.3402974","DOIUrl":null,"url":null,"abstract":"Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions often carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which being the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision making (ADM) systems. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced for combatting the lack of computational transparency. Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach, and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field’s unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the “administrative Turing test” which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability on which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.","PeriodicalId":369466,"journal":{"name":"Political Economy: Structure & Scope of Government eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration\",\"authors\":\"Henrik Palmer Olsen, Jacob Livingston Slosser, Thomas T. Hildebrandt, Cornelius Wiesener\",\"doi\":\"10.2139/ssrn.3402974\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions often carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which being the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision making (ADM) systems. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced for combatting the lack of computational transparency. 
Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach, and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field’s unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the “administrative Turing test” which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability on which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.\",\"PeriodicalId\":369466,\"journal\":{\"name\":\"Political Economy: Structure & Scope of Government eJournal\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Political Economy: Structure & Scope of Government eJournal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3402974\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Political Economy: Structure & Scope of Government eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3402974","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration
Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions frequently carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which is the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision-making (ADM) systems. Out of fear of the opaqueness of these ADM systems' dreaded black box, countless ethical guidelines have been produced to combat the lack of computational transparency. Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field's unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the "administrative Turing test," which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability against which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on its implementation.