{"title":"Robot – sodelavec ali stroj?","authors":"Andreja Primec","doi":"10.18690/978-961-286-366-1.3","DOIUrl":null,"url":null,"abstract":"The recent crashes of Boeing 737 max 8 aircraft have revealed numerous irregularities in the management of artificial intelligence systems. Given that the aeronautical sector is subject to particularly stringent security standards and controls, it may be a warning that artificial intelligence systems need to be accelerated in the future, not only in the technology but also in the legal field, by establishing an appropriate legislative framework that will ensure the safe use of all forms of artificial intelligence systems, while development. The Commission has carried out an evaluation of the Directive on liability for defective products, which shows that it is still an appropriate tool. Therefore, the principle of strict liability remains intact. However, due to the increasingly powerful artificial intelligence systems and their autonomous decision-making capabilities, we need to think ahead. At this moment, it is impossible to predict whether development is going towards shared responsibility or autonomous responsibility. It is indisputable that behind every artificial intelligence system, there is a human, a creator, who, because of the effects of artificial intelligence on the whole of society, must, in his work, proceed from ethical principles and respect for fundamental rights.","PeriodicalId":142514,"journal":{"name":"Pravo in ekonomija: Digitalno gospodarstvo","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pravo in ekonomija: Digitalno gospodarstvo","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18690/978-961-286-366-1.3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The recent crashes of the Boeing 737 MAX 8 aircraft have revealed numerous irregularities in the management of artificial intelligence systems. Given that the aeronautical sector is subject to particularly stringent safety standards and controls, this may serve as a warning that work on artificial intelligence systems must be accelerated not only in the technological but also in the legal field, by establishing an appropriate legislative framework that ensures the safe use of all forms of artificial intelligence systems as they develop. The Commission has carried out an evaluation of the Directive on liability for defective products, which shows that it remains an appropriate tool; the principle of strict liability therefore stays intact. However, given the increasingly powerful artificial intelligence systems and their capacity for autonomous decision-making, we need to think ahead. At this moment it is impossible to predict whether development is moving towards shared responsibility or autonomous responsibility. It is indisputable that behind every artificial intelligence system there is a human, a creator, who, because of the effects of artificial intelligence on society as a whole, must in his work proceed from ethical principles and respect for fundamental rights.