Algorithmic explainability and legal reasoning

Impact Factor: 1.5 · JCR Quartile: Q1 (LAW)
Author: Zsolt Ződi
Journal: Theory and Practice of Legislation
DOI: 10.1080/20508840.2022.2033945
Publication date: 2022-01-02
Publication type: Journal Article
Citations: 2

Abstract

Algorithmic explainability has become one of the key topics of the last decade of discourse about automated decision-making (ADM, machine-made decisions). Within this discourse, an important subfield deals with the explainability of machine-made decisions or outputs that affect a person’s legal position or have legal implications in general – in short, algorithmic legal decisions. These could be decisions or recommendations taken or given by software that supports judges, governmental agencies, or private actors. They could involve, for example, the automatic refusal of an online credit application, e-recruiting practices without any human intervention, or a prediction of a person’s likelihood of recidivism. This article is a contribution to this discourse, and it claims that, as explainability has become a prominent issue in hundreds of ethical codes, policy papers and scholarly writings, it has become a ‘semantically overloaded’ concept. It has acquired such a broad meaning, overlapping with so many other ethical issues and values, that it is worth narrowing down and clarifying. This study suggests that the concept should be used only for individual automated decisions, especially those made by software based on machine learning, i.e. ‘black-box-like’ systems. If the term explainability is applied only to this area, it allows us to draw parallels between legal decisions and machine decisions, thus recognising the subject as a problem of legal reasoning and, in part, of linguistics. The second claim of this article is that algorithmic legal decisions should follow the pattern of legal reasoning, translating the machine outputs into a form where the decision is explained as the application of norms to a factual situation. Therefore, just as the norms and the facts must be translated into data for the algorithm, so the data outputs should be back-translated into a proper legal justification.
Source journal: Theory and Practice of Legislation
CiteScore: 4.50
Self-citation rate: 10.00%
Annual articles: 23
About the journal: The Theory and Practice of Legislation aims to offer an international and interdisciplinary forum for the examination of legislation. The focus of the journal, which succeeds the former title Legisprudence, remains with legislation in its broadest sense. Legislation is seen as both process and product, a reflection of theoretical assumptions and a skill. The journal addresses formal legislation and its alternatives (such as covenants, regulation by non-state actors, etc.). The editors welcome articles on systematic (as opposed to historical) issues, including drafting techniques, the introduction of open standards, evidence-based drafting, pre- and post-legislative scrutiny for effectiveness and efficiency, the utility and necessity of codification, IT in legislation, the legitimacy of legislation in view of fundamental principles and rights, law and language, and the link between legislator and judge. Comparative and interdisciplinary approaches are encouraged, but dogmatic descriptions of positive law are outside the scope of the journal. The journal offers a combination of themed issues and general issues. All articles are submitted to double-blind review.