Civil Liability in the Development and Application of Artificial Intelligence and Robotic Systems: Basic Approaches

IF 0.2 Q3 LAW
Y. Kharitonova, V. Savina, F. Pagnini
{"title":"人工智能和机器人系统开发和应用中的民事责任:基本方法","authors":"Y. Kharitonova, V. Savina, F. Pagnini","doi":"10.17072/1995-4190-2022-58-683-708","DOIUrl":null,"url":null,"abstract":"Introduction: when studying legal issues related to safety and adequacy in the application of artificial intelligence systems (AIS), it is impossible not to raise the subject of liability accompanying the use of AIS. In this paper we focus on the study of the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress necessitates revision of many legislative mechanisms in such a way as to maintain and encourage further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look forward and develop new rules based on short-term forecasts. There is no longer any reason to claim categorically that the rules governing the institute of legal responsibility will definitely not require fundamental changes, contrary to earlier belief. This is due to the growing autonomy of AIS and the expansion of the range of their possible applications. Artificial intelligence is routinely employed in creative industries, decision-making in different fields of human activity, unmanned transportation, etc. However, there remain unresolved major issues concerning the parties liable in the case of infliction of harm by AIS, the viability of applying no-fault liability mechanisms, the appropriate levels of regulation of such relations; and discussions over these issues are far from being over. Purpose: basing on an analysis of theoretical concepts and legislation in both Russia and other countries, to develop a vision of civil law regulation and tort liability in cases when artificial intelligence is used. Methods: empirical methods of comparison, description, interpretation; theoretical methods of formal and dialectical logic; special scientific methods: legal-dogmatic and the method of interpretation of legal norms. Results: there is considerable debate over the responsibilities of AIS owners and users. In many countries, codes of ethics for artificial intelligence are accepted. However, what is required is legal regulation, for instance, considering an AIS as a source of increased danger; in the absence of relevant legal standards, it is reasonable to use a tort liability mechanism based on analogy of the law. Standardization in this area (standardization of databases, software, infrastructure, etc.) is also important – for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may also be a ground for holding them liable under civil law. There appear new dimensions added to the classic legal notions such as the subject of harm, object of harm, and the party that has inflicted the harm, used with regard to both contractual and non-contractual liability. Conclusions: the research has shown that legislation of different countries currently provides soft regulation with regard to liability for harm caused by AIS. However, it is time to gradually move from the development of strategies to practical steps toward the creation of effective mechanisms aimed at minimizing the risks of harm without any persons held liable. 
Since the process of developing AIS involves many participants with an independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is rather difficult to establish the liable party in case something goes wrong, and many factors must be taken into account. Regarding infliction of harm to third parties, it seems logical and reasonable to treat an AIS as a source of increased danger; and in the absence of relevant legal regulations, it would be reasonable to use a tort liability mechanism by analogy of the law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of violation of the terms of the contract.","PeriodicalId":42087,"journal":{"name":"Vestnik Permskogo Universiteta-Juridicheskie Nauki","volume":"58 1","pages":""},"PeriodicalIF":0.2000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"CIVIL LIABILITY IN THE DEVELOPMENT AND APPLICATION OF ARTIFICIAL INTELLIGENCE AND ROBOTIC SYSTEMS: BASIC APPROACHES\",\"authors\":\"Y. Kharitonova, V. Savina, F. Pagnini\",\"doi\":\"10.17072/1995-4190-2022-58-683-708\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Introduction: when studying legal issues related to safety and adequacy in the application of artificial intelligence systems (AIS), it is impossible not to raise the subject of liability accompanying the use of AIS. In this paper we focus on the study of the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress necessitates revision of many legislative mechanisms in such a way as to maintain and encourage further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look forward and develop new rules based on short-term forecasts. There is no longer any reason to claim categorically that the rules governing the institute of legal responsibility will definitely not require fundamental changes, contrary to earlier belief. This is due to the growing autonomy of AIS and the expansion of the range of their possible applications. Artificial intelligence is routinely employed in creative industries, decision-making in different fields of human activity, unmanned transportation, etc. However, there remain unresolved major issues concerning the parties liable in the case of infliction of harm by AIS, the viability of applying no-fault liability mechanisms, the appropriate levels of regulation of such relations; and discussions over these issues are far from being over. Purpose: basing on an analysis of theoretical concepts and legislation in both Russia and other countries, to develop a vision of civil law regulation and tort liability in cases when artificial intelligence is used. Methods: empirical methods of comparison, description, interpretation; theoretical methods of formal and dialectical logic; special scientific methods: legal-dogmatic and the method of interpretation of legal norms. Results: there is considerable debate over the responsibilities of AIS owners and users. In many countries, codes of ethics for artificial intelligence are accepted. 
However, what is required is legal regulation, for instance, considering an AIS as a source of increased danger; in the absence of relevant legal standards, it is reasonable to use a tort liability mechanism based on analogy of the law. Standardization in this area (standardization of databases, software, infrastructure, etc.) is also important – for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may also be a ground for holding them liable under civil law. There appear new dimensions added to the classic legal notions such as the subject of harm, object of harm, and the party that has inflicted the harm, used with regard to both contractual and non-contractual liability. Conclusions: the research has shown that legislation of different countries currently provides soft regulation with regard to liability for harm caused by AIS. However, it is time to gradually move from the development of strategies to practical steps toward the creation of effective mechanisms aimed at minimizing the risks of harm without any persons held liable. Since the process of developing AIS involves many participants with an independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is rather difficult to establish the liable party in case something goes wrong, and many factors must be taken into account. Regarding infliction of harm to third parties, it seems logical and reasonable to treat an AIS as a source of increased danger; and in the absence of relevant legal regulations, it would be reasonable to use a tort liability mechanism by analogy of the law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of violation of the terms of the contract.\",\"PeriodicalId\":42087,\"journal\":{\"name\":\"Vestnik Permskogo Universiteta-Juridicheskie Nauki\",\"volume\":\"58 1\",\"pages\":\"\"},\"PeriodicalIF\":0.2000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Vestnik Permskogo Universiteta-Juridicheskie Nauki\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.17072/1995-4190-2022-58-683-708\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vestnik Permskogo Universiteta-Juridicheskie Nauki","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.17072/1995-4190-2022-58-683-708","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"LAW","Score":null,"Total":0}
Cited by: 2

Abstract

Introduction: any study of the legal issues of safety and adequacy in the application of artificial intelligence systems (AIS) inevitably raises the question of the liability accompanying their use. This paper focuses on the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress requires revising many legislative mechanisms so as to maintain and encourage the further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look ahead and develop new rules based on short-term forecasts. Contrary to earlier belief, there is no longer any reason to claim categorically that the rules governing the institution of legal liability will not require fundamental changes. This is due to the growing autonomy of AIS and the expanding range of their possible applications: artificial intelligence is now routinely employed in creative industries, in decision-making across different fields of human activity, in unmanned transportation, and elsewhere. However, major issues remain unresolved concerning the parties liable when AIS inflict harm, the viability of no-fault liability mechanisms, and the appropriate level of regulation of such relations; the discussion of these issues is far from over.

Purpose: based on an analysis of theoretical concepts and legislation in Russia and other countries, to develop a vision of civil law regulation and tort liability in cases where artificial intelligence is used.

Methods: the empirical methods of comparison, description, and interpretation; the theoretical methods of formal and dialectical logic; special scientific methods, namely the legal-dogmatic method and the method of interpretation of legal norms.

Results: there is considerable debate over the responsibilities of AIS owners and users. Many countries have adopted codes of ethics for artificial intelligence; what is required, however, is legal regulation, for instance treating an AIS as a source of increased danger and, in the absence of relevant legal standards, applying a tort liability mechanism by analogy of law. Standardization in this area (of databases, software, infrastructure, etc.) is also important for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may itself be a ground for civil liability. New dimensions are being added to classic legal notions such as the subject of harm, the object of harm, and the party that has inflicted the harm, applied to both contractual and non-contractual liability.

Conclusions: the research shows that the legislation of different countries currently provides only soft regulation of liability for harm caused by AIS. It is time, however, to move gradually from drafting strategies to practical steps toward effective mechanisms that minimize the risk of harm being inflicted without anyone being held liable. Since the development of AIS involves many participants with independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is difficult to establish the liable party when something goes wrong, and many factors must be taken into account. With regard to harm inflicted on third parties, it seems logical and reasonable to treat an AIS as a source of increased danger and, in the absence of relevant legal regulations, to apply a tort liability mechanism by analogy of law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of breaching the terms of the contract.