Algorithmic explainability and legal reasoning
Author: Zsolt Ződi
DOI: 10.1080/20508840.2022.2033945
Journal: Theory and Practice of Legislation
Publication date: 2022-01-02

ABSTRACT Algorithmic explainability has become one of the key topics of the last decade's discourse about automated decision making (ADM, i.e. machine-made decisions). Within this discourse, an important subfield deals with the explainability of machine-made decisions or outputs that affect a person's legal position or have legal implications in general – in short, algorithmic legal decisions. These could be decisions or recommendations taken or given by software that supports judges, governmental agencies, or private actors. They could involve, for example, the automatic refusal of an online credit application, e-recruiting practices without any human intervention, or a prediction about a person's likelihood of recidivism. This article is a contribution to this discourse, and it claims that, as explainability has become a prominent issue in hundreds of ethical codes, policy papers and scholarly writings, it has also become a 'semantically overloaded' concept. It has acquired such a broad meaning, overlapping with so many other ethical issues and values, that it is worth narrowing down and clarifying. This study suggests that the concept should be used only for individual automated decisions, especially those made by software based on machine learning, i.e. 'black box-like' systems. If the term explainability is applied only to this area, it allows us to draw parallels between legal decisions and machine decisions, thus recognising the subject as a problem of legal reasoning and, in part, of linguistics. The second claim of this article is that algorithmic legal decisions should follow the pattern of legal reasoning, translating the machine outputs into a form in which the decision is explained as the application of norms to a factual situation. Therefore, just as the norms and the facts must be translated into data for the algorithm, so the data outputs must be translated back into a proper legal justification.
Journal description:
The Theory and Practice of Legislation aims to offer an international and interdisciplinary forum for the examination of legislation. The focus of the journal, which succeeds the former title Legisprudence, remains legislation in its broadest sense. Legislation is seen as both process and product, as a reflection of theoretical assumptions, and as a skill. The journal addresses formal legislation and its alternatives (such as covenants, regulation by non-state actors, etc.). The editors welcome articles on systematic (as opposed to historical) issues, including drafting techniques, the introduction of open standards, evidence-based drafting, pre- and post-legislative scrutiny for effectiveness and efficiency, the utility and necessity of codification, IT in legislation, the legitimacy of legislation in view of fundamental principles and rights, law and language, and the link between legislator and judge. Comparative and interdisciplinary approaches are encouraged, but dogmatic descriptions of positive law are outside the scope of the journal. The journal offers a combination of themed issues and general issues. All articles are submitted to double-blind review.