{"title":"Governance via Explainability","authors":"D. Danks","doi":"10.1093/oxfordhb/9780197579329.013.11","DOIUrl":null,"url":null,"abstract":"AI governance often requires knowing why the system behaved as it did, and explanations are a common way to convey this kind of why-information. Explainable AI (XAI) thus seems to be particularly well-suited to governance; one might even think that explainability is a prerequisite for AI governance. This chapter explores this intuitively plausible route of AI governance via explainability. The core challenge is that governance, explanations, and XAI are all significantly more complex than this intuitive connection suggests, creating the risk that the explanations provided by XAI are not the kind required for governance. This chapter thus first provides a high-level overview of three types of XAI that differ based on who generates the explanation (AI vs. human) and the grounding of the explanation (facts about system vs. plausibility of the story). These different types of XAI each presuppose a substantive theory of explanations, so the chapter then provides an overview of both philosophical and psychological theories of explanation. Finally, these pieces are brought together to provide a concrete framework for using XAI to create, support, or enable many of the key functions of AI governance. XAI systems are not necessarily more governable than non-XAI systems, nor is explainability a solution for all challenges of AI governance. However, explainability does provide a valuable tool in the design and implementation of AI governance mechanisms.","PeriodicalId":348006,"journal":{"name":"The Oxford Handbook of AI Governance","volume":"235 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Oxford Handbook of AI Governance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/oxfordhb/9780197579329.013.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
AI governance often requires knowing why the system behaved as it did, and explanations are a common way to convey this kind of why-information. Explainable AI (XAI) thus seems to be particularly well-suited to governance; one might even think that explainability is a prerequisite for AI governance. This chapter explores this intuitively plausible route of AI governance via explainability. The core challenge is that governance, explanations, and XAI are all significantly more complex than this intuitive connection suggests, creating the risk that the explanations provided by XAI are not the kind required for governance. This chapter thus first provides a high-level overview of three types of XAI that differ based on who generates the explanation (AI vs. human) and the grounding of the explanation (facts about the system vs. plausibility of the story). These different types of XAI each presuppose a substantive theory of explanation, so the chapter then provides an overview of both philosophical and psychological theories of explanation. Finally, these pieces are brought together to provide a concrete framework for using XAI to create, support, or enable many of the key functions of AI governance. XAI systems are not necessarily more governable than non-XAI systems, nor is explainability a solution for all challenges of AI governance. However, explainability does provide a valuable tool in the design and implementation of AI governance mechanisms.