The New Subjects of Law – Are Artificial Intelligence Systems Already among Us?

De Jure (IF 0.1, Q4, Law) · Published: 2022-12-21 · DOI: 10.54664/brem6290
Diana Kovacheva

Abstract

The study explores the issue of the legal personality and liability of artificial intelligence (AI) systems. A real AI would have a will and self-awareness, but at this point there are mainly systems with a collective “cloud” intelligence located outside of them and supported by people (Sophia, the chatbot Miraya, the chatbot Tay, the xenobots). It is important to be clear about whether robots are still only a “means”, a “tool” that facilitates human life, or whether they already possess qualities that make them independent entities. Currently, AI systems are treated as objects of law. Granting them legal personality similar to that of legal entities is not a solution either, because of their specific nature. If, in the future, intelligent systems become independent and emancipated from the human beings that created them, they could be considered a new, specific subject – a legal person sui generis. The regulatory framework of international organizations in this area already places robots in the category of “electronic persons” (EU) and binds their legal status to the protection of basic human rights. At this point, a number of practical issues remain to be resolved – identifiability, the establishment of a register, and the currency of the data in it. The possible granting of legal personality to AI systems, even a specific or limited one, raises the question of the rights of robots themselves (procedural legal capacity, property rights, labour rights, tax legal personality), as well as of the responsibility for damages and their compensation. One of the most important issues in the development of intelligent machines is the extent to which we should allow them to make autonomous or automated decisions. The algorithms initially set to protect fundamental human rights should be stable, or “locked” against changes by AI systems in the course of their improvement and self-learning.

The issue of human control is important, especially where decisions might affect human life, health, and social support. The rapid development of digital technologies should make us think about a future in which AI systems may deviate so far from the basic algorithms set by humans that questions of joint and individual financial liability arise. Legal theory also discusses whether criminal liability can apply to robots.