AI Systems and Liability: An Assessment of the Applicability of Strict Liability & A Case for Limited Legal Personhood for AI

Louisa McDonald
{"title":"人工智能系统与责任:严格责任的适用性评估&以人工智能有限法人资格为例","authors":"Louisa McDonald","doi":"10.15664/stalj.v3i1.2645","DOIUrl":null,"url":null,"abstract":"Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about whether conventional liability laws can be applicable to AI systems which manifest a high degree of autonomy. Users and developers of such AI systems may meet neither the epistemic (sufficient degree of awareness of what is happening) nor control (control over the actions performed) conditions of personal responsibility for the actions of the system at hand, and therefore, conventional liability schemes may seem to be inapplicable[1]. \nThe recently adopted AI Liability Directive [2022] has sought to adapt EU law to the challenges to conventional liability schemes posed by AI systems by imposing a system of strict, rather than fault-based liability, for AI systems. The goal of this is to be able to more easily hold developers, producers, and users of AI technologies accountable, requiring them to explain how AI systems were built and trained. The Directive aims to make it easier for people and companies harmed by AI systems to sue those responsible for the AI systems for damages. However, the Directive seems to ignore the potential injustice that could result from producers and developers being held accountable for actions caused by AI systems which they are neither aware of nor have sufficient control over. \n In this essay, I will critically assess the Directive’s system of fault-based liability for AI systems and argue that, whilst such a system may confer some instrumental advantages on behalf of those suing for damages caused by AI systems, it risks causing injustice on the part of developers and producers by making them liable for events they could neither control nor predict. This is likely to risk both producing unjust outcomes and hindering progress in AI development. Instead, following Visa Kurki’s analysis of legal personhood as a cluster concept divided into passive and active incidents, I will argue that some AI systems ought to be granted a limited form of legal personhood, because they meet some of the relevant criteria for active legal personhood, such as the capacity to perform acts-in-the-law. The legal personhood I propose for AI systems is a kind of dependent legal personhood analogous to that granted to corporations. Such a form of legal personhood would not absolve developers and producers from liability for damages (where such liability is applicable), but at the same time, it would not risk unjustly holding producers and developers liable for actions of an AI system. 
\n[1] Mark Coeckelbergh, \"Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.\" Science and Engineering Ethics, (2020): 2054 ","PeriodicalId":292385,"journal":{"name":"St Andrews Law Journal","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI Systems and Liability: An Assessment of the Applicability of Strict Liability & A Case for Limited Legal Personhood for AI\",\"authors\":\"Louisa McDonald\",\"doi\":\"10.15664/stalj.v3i1.2645\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about whether conventional liability laws can be applicable to AI systems which manifest a high degree of autonomy. Users and developers of such AI systems may meet neither the epistemic (sufficient degree of awareness of what is happening) nor control (control over the actions performed) conditions of personal responsibility for the actions of the system at hand, and therefore, conventional liability schemes may seem to be inapplicable[1]. \\nThe recently adopted AI Liability Directive [2022] has sought to adapt EU law to the challenges to conventional liability schemes posed by AI systems by imposing a system of strict, rather than fault-based liability, for AI systems. The goal of this is to be able to more easily hold developers, producers, and users of AI technologies accountable, requiring them to explain how AI systems were built and trained. The Directive aims to make it easier for people and companies harmed by AI systems to sue those responsible for the AI systems for damages. However, the Directive seems to ignore the potential injustice that could result from producers and developers being held accountable for actions caused by AI systems which they are neither aware of nor have sufficient control over. \\n In this essay, I will critically assess the Directive’s system of fault-based liability for AI systems and argue that, whilst such a system may confer some instrumental advantages on behalf of those suing for damages caused by AI systems, it risks causing injustice on the part of developers and producers by making them liable for events they could neither control nor predict. This is likely to risk both producing unjust outcomes and hindering progress in AI development. Instead, following Visa Kurki’s analysis of legal personhood as a cluster concept divided into passive and active incidents, I will argue that some AI systems ought to be granted a limited form of legal personhood, because they meet some of the relevant criteria for active legal personhood, such as the capacity to perform acts-in-the-law. The legal personhood I propose for AI systems is a kind of dependent legal personhood analogous to that granted to corporations. Such a form of legal personhood would not absolve developers and producers from liability for damages (where such liability is applicable), but at the same time, it would not risk unjustly holding producers and developers liable for actions of an AI system. 
\\n[1] Mark Coeckelbergh, \\\"Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.\\\" Science and Engineering Ethics, (2020): 2054 \",\"PeriodicalId\":292385,\"journal\":{\"name\":\"St Andrews Law Journal\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"St Andrews Law Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.15664/stalj.v3i1.2645\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"St Andrews Law Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15664/stalj.v3i1.2645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about whether conventional liability laws can apply to AI systems that manifest a high degree of autonomy. Users and developers of such systems may meet neither the epistemic condition (sufficient awareness of what is happening) nor the control condition (control over the actions performed) for personal responsibility for the system's actions, and conventional liability schemes may therefore seem inapplicable [1].

The recently adopted AI Liability Directive (2022) seeks to adapt EU law to the challenges that AI systems pose to conventional liability schemes by imposing a system of strict, rather than fault-based, liability for AI systems. The aim is to make it easier to hold developers, producers, and users of AI technologies accountable, requiring them to explain how AI systems were built and trained, and to make it easier for people and companies harmed by AI systems to sue those responsible for damages. However, the Directive seems to ignore the potential injustice of holding producers and developers accountable for actions of AI systems that they are neither aware of nor have sufficient control over.

In this essay, I critically assess the Directive's system of strict liability for AI systems and argue that, whilst such a system may confer some instrumental advantages on those suing for damages caused by AI systems, it risks injustice towards developers and producers by making them liable for events they could neither control nor predict, and is thereby likely both to produce unjust outcomes and to hinder progress in AI development. Instead, following Visa Kurki's analysis of legal personhood as a cluster concept divided into passive and active incidents, I argue that some AI systems ought to be granted a limited form of legal personhood, because they meet some of the relevant criteria for active legal personhood, such as the capacity to perform acts-in-the-law. The legal personhood I propose for AI systems is a kind of dependent legal personhood analogous to that granted to corporations. Such a form of legal personhood would not absolve developers and producers from liability for damages (where such liability is applicable), but nor would it risk unjustly holding them liable for the actions of an AI system.

[1] Mark Coeckelbergh, "Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability." Science and Engineering Ethics (2020): 2054.