Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles

Jake van der Laan
DOI: 10.2139/ssrn.4242175
Journal: Vietnamese Journal of Legal Sciences, vol. 48, no. 1
Published: 2022-12-01 (Journal Article)
Citations: 0

Abstract

The now prevalent use of Artificial Intelligence (AI), and specifically machine learning-driven models, to automate the making of decisions raises novel legal issues. One issue of particular importance arises when the rationale for the automated decision is not readily determinable or traceable by virtue of the complexity of the model used: How can such a decision be legally assessed and substantiated? How can any potential legal liability for a "wrong" decision be properly determined? These questions are being explored by organizations and governments around the world. A key informant to any analysis in these cases is the extent to which the model in question is "explainable". This paper seeks to provide (1) an introductory overview of the technical components of machine learning models in a manner consumable by someone without a computer science or mathematics background, (2) a summary of the Canadian and Vietnamese responses to the explainability challenge so far, (3) an analysis of what an "explanation" is in the scientific and legal domains, and (4) a preliminary legal framework for analyzing the sufficiency of explanation of a particular model and its prediction(s).