The Need for a Legal Standard of Care in the AI Environment

Mohammad H. Bashayreh, Amer Tabbara, Fadi N. Sibai
Sriwijaya Law Review, Q3 (Social Sciences), published 27 January 2023
DOI: 10.28946/slrev.vol7.iss1.1507.pp73-86
Citations: 0

Abstract

Artificial intelligence (AI) has given rise to legal debate over legal liability involved in an incident. An intelligent machine can learn through experience and adapt its decisions accordingly. As such, if an intelligent machine's behaviour causes harm, the developer and the machine's owner may argue that the autonomous nature of AI systems has broken the causal link. The difficulty of determining who is liable for the harmful behaviour of an AI system is accentuated by the fact that tracing back the decision-making process of an AI system is not always possible. This paper aims to put forward a definition of a duty of care for developers and users of AI systems that could be the basis for the investigation of liability while seeking predictability in the allocation of legal liability in many cases involving AI incidents. The paper examines some guidelines on ethics for AI to discern essential elements of the duty of care in the AI environment. The paper argues that a uniform minimum standard of care should be adopted internationally through model laws or even an international convention. A uniform standard of care should be enforced by State control rather than by self-regulation within the AI industry. A licensing or certification requirement for AI products should be implemented to verify that the elements of the duty of care have been satisfied, so as to control AI production and import/export relations. Violation of the standard of care can be an objective ground to negate or allocate negligence, especially when verifying errors in the design of the relevant software or when explaining the AI system's behaviour is not possible. A clear standard of care would, this paper assumes, help promote AI development and use and would not create impediments to investment in AI production.
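The abstract's point that liability turns partly on whether an AI system's decision-making can be traced back after an incident invites a concrete illustration. The sketch below is not from the paper; it is a minimal, hypothetical example of what one traceability element of a duty of care might look like in practice, namely a developer keeping a tamper-evident record of a model's individual decisions. All names here (DecisionRecord, AuditLog, the "credit-scorer-1.4" system and its inputs) are assumptions made for illustration only.

```python
# Hypothetical sketch: a tamper-evident decision log that a developer or
# operator might retain so an AI system's behaviour can be reconstructed
# after an incident. Illustrative only; not taken from the paper or from
# any specific regulation.

import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any


@dataclass
class DecisionRecord:
    """One logged decision: model version, inputs, and the output produced."""
    model_version: str
    inputs: dict[str, Any]
    output: Any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Hash of the record, so later tampering can be detected."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class AuditLog:
    """Append-only log of decisions together with their digests."""

    def __init__(self) -> None:
        self._records: list[tuple[DecisionRecord, str]] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append((rec, rec.digest()))

    def verify(self) -> bool:
        """Re-check every stored digest; False signals a modified record."""
        return all(rec.digest() == h for rec, h in self._records)


if __name__ == "__main__":
    log = AuditLog()
    log.record(DecisionRecord(
        model_version="credit-scorer-1.4",        # hypothetical system
        inputs={"income": 42000, "defaults": 0},  # hypothetical features
        output="approve",
    ))
    print("log intact:", log.verify())
```

The tamper-evident digest is the point of the design choice: a record that can be shown not to have been altered is the kind of evidence a regulator, certifier, or court could rely on when assessing whether the elements of the duty of care were satisfied.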
Source Journal
Sriwijaya Law Review (Social Sciences-Law)
CiteScore: 1.00
Self-citation rate: 0.00%
Review time: 8 weeks