Governance via Explainability

D. Danks
{"title":"Governance via Explainability","authors":"D. Danks","doi":"10.1093/oxfordhb/9780197579329.013.11","DOIUrl":null,"url":null,"abstract":"AI governance often requires knowing why the system behaved as it did, and explanations are a common way to convey this kind of why-information. Explainable AI (XAI) thus seems to be particularly well-suited to governance; one might even think that explainability is a prerequisite for AI governance. This chapter explores this intuitively plausible route of AI governance via explainability. The core challenge is that governance, explanations, and XAI are all significantly more complex than this intuitive connection suggests, creating the risk that the explanations provided by XAI are not the kind required for governance. This chapter thus first provides a high-level overview of three types of XAI that differ based on who generates the explanation (AI vs. human) and the grounding of the explanation (facts about system vs. plausibility of the story). These different types of XAI each presuppose a substantive theory of explanations, so the chapter then provides an overview of both philosophical and psychological theories of explanation. Finally, these pieces are brought together to provide a concrete framework for using XAI to create, support, or enable many of the key functions of AI governance. XAI systems are not necessarily more governable than non-XAI systems, nor is explainability a solution for all challenges of AI governance. However, explainability does provide a valuable tool in the design and implementation of AI governance mechanisms.","PeriodicalId":348006,"journal":{"name":"The Oxford Handbook of AI Governance","volume":"235 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Oxford Handbook of AI Governance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/oxfordhb/9780197579329.013.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

AI governance often requires knowing why the system behaved as it did, and explanations are a common way to convey this kind of why-information. Explainable AI (XAI) thus seems to be particularly well-suited to governance; one might even think that explainability is a prerequisite for AI governance. This chapter explores this intuitively plausible route of AI governance via explainability. The core challenge is that governance, explanations, and XAI are all significantly more complex than this intuitive connection suggests, creating the risk that the explanations provided by XAI are not the kind required for governance. This chapter thus first provides a high-level overview of three types of XAI that differ based on who generates the explanation (AI vs. human) and the grounding of the explanation (facts about the system vs. plausibility of the story). These different types of XAI each presuppose a substantive theory of explanations, so the chapter then provides an overview of both philosophical and psychological theories of explanation. Finally, these pieces are brought together to provide a concrete framework for using XAI to create, support, or enable many of the key functions of AI governance. XAI systems are not necessarily more governable than non-XAI systems, nor is explainability a solution for all challenges of AI governance. However, explainability does provide a valuable tool in the design and implementation of AI governance mechanisms.
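The two classification axes named in the abstract (who generates the explanation, and what grounds it) can be made concrete with a small enumeration. The following is a minimal, illustrative Python sketch, not from the chapter itself: it simply crosses the two dimensions to show the space of possible XAI types. The abstract states that the chapter's taxonomy comprises three types, so presumably one of the four cells below is excluded; which one is not stated here, and this sketch does not guess.

```python
from itertools import product

# The two axes from the abstract.
GENERATORS = ("AI", "human")                 # who produces the explanation
GROUNDINGS = ("facts about the system",      # e.g., the system's actual internals
              "plausibility of the story")   # narrative coherence for a human audience

# Enumerate the four logically possible combinations; per the abstract,
# the chapter's taxonomy uses three of these.
for generator, grounding in product(GENERATORS, GROUNDINGS):
    print(f"explanation generated by {generator}, grounded in {grounding}")
```

Framing the taxonomy as a cross-product makes the abstract's point easier to see: an explanation can be produced by the system yet graded only on how plausible its story sounds, in which case it may not convey the why-information that governance requires.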