How guilty is a robot who kills other robots?

O. Parlangeli, Stefano Guidi, E. Marchigiani, P. Palmitesta, A. Andreadis, S. Roncato
{"title":"How guilty is a robot who kills other robots?","authors":"O. Parlangeli, Stefano Guidi, E. Marchigiani, P. Palmitesta, A. Andreadis, S. Roncato","doi":"10.1109/IISA50023.2020.9284338","DOIUrl":null,"url":null,"abstract":"Safety may depends crucially on making moral judgments. To date we have a lack of knowledge about the possibility of intervening in the processes that lead to moral judgments in relation to the behavior of artificial agents. The study reported here involved 293 students from the University of Siena who made moral judgments after reading the description of an event in which a person or robot killed other people or robots. The study was conducted through an online questionnaire. The results suggest that moral judgments essentially depend on the type of victim and that are different if they involve human or artificial agents. Furthermore, some characteristics of the evaluators, such as the greater or lesser disposition to attribute mental states to artificial agents, have an influence on these evaluations. On the other hand, the level of familiarity with these systems seems to have a limited effect.","PeriodicalId":109238,"journal":{"name":"2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISA50023.2020.9284338","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Safety may depend crucially on making moral judgments. To date, little is known about the possibility of intervening in the processes that lead to moral judgments about the behavior of artificial agents. The study reported here involved 293 students from the University of Siena, who made moral judgments after reading the description of an event in which a person or a robot killed other people or robots. The study was conducted through an online questionnaire. The results suggest that moral judgments depend essentially on the type of victim and that they differ depending on whether the agents involved are human or artificial. Furthermore, some characteristics of the evaluators, such as a greater or lesser disposition to attribute mental states to artificial agents, influence these evaluations. The level of familiarity with these systems, on the other hand, seems to have only a limited effect.
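The abstract describes what is essentially a 2 x 2 between-subjects design (type of agent x type of victim), with individual dispositions such as mind attribution as a further predictor. Purely as an illustration, and not as the authors' actual analysis, the following minimal Python sketch shows how guilt ratings from such a questionnaire could be modelled with statsmodels; all column names and values are invented stand-ins.

# Hypothetical sketch (not the authors' code): modelling moral judgments
# from a 2 x 2 between-subjects design like the one described above.
# Data and column names are invented for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for questionnaire responses (the study had N = 293).
data = pd.DataFrame({
    "guilt": [6.2, 5.9, 4.8, 4.5, 3.9, 4.1, 2.7, 3.1],   # judged guilt rating
    "agent": ["human", "human", "human", "human",
              "robot", "robot", "robot", "robot"],         # who killed
    "victim": ["human", "human", "robot", "robot",
               "human", "human", "robot", "robot"],        # who was killed
    "mind_attribution": [3.1, 4.0, 2.6, 3.7,
                         4.2, 2.9, 3.5, 3.3],              # disposition score
})

# OLS with the agent x victim interaction plus mind attribution as a covariate.
model = smf.ols("guilt ~ C(agent) * C(victim) + mind_attribution",
                data=data).fit()
print(model.summary())

In such a model, a significant effect of the victim factor would correspond to the abstract's finding that judgments depend essentially on the type of victim, while a small coefficient for a familiarity covariate (not shown here) would match its limited effect.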