Do moral robots always fail? Investigating human attitudes towards ethical decisions of automated systems

Philipp Wintersberger, Anna-Katharina Frison, A. Riener, Shailie Thakkar
{"title":"Do moral robots always fail? Investigating human attitudes towards ethical decisions of automated systems","authors":"Philipp Wintersberger, Anna-Katharina Frison, A. Riener, Shailie Thakkar","doi":"10.1109/ROMAN.2017.8172493","DOIUrl":null,"url":null,"abstract":"Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have been performed by humans. Still, it will be humans that interact with these systems — relying on the system ('s decisions) will require trust in the robot/machine and its algorithms. Trust research has a long history. One dimension of trust, ethical or morally acceptable decisions, has not received much attention so far. Humans are continuously faced with ethical decisions, reached based on a personal value system and intuition. In order for people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g. when a decision requires to favor one person over another, this approach would violate laws and international human rights as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make decisions by chance — but what about acceptance by system users? To find out if randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey where subjects had to rate their personal attitudes toward decisions of moral algorithms in different scenarios. Our results (n=330) show that, despite slightly more respondents state preferring decisions based on ethical rules, randomization is perceived to be most just and morally right and thus may drive decisions in case other objective parameters equate.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.2017.8172493","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have so far been performed by humans. Still, it will be humans who interact with these systems, and relying on the system's decisions will require trust in the robot/machine and its algorithms. Trust research has a long history. One dimension of trust, ethical or morally acceptable decisions, has not received much attention so far. Humans are continuously faced with ethical decisions, which they reach based on a personal value system and intuition. For people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g. when a decision requires favoring one person over another, this approach would violate laws and international human rights, as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make such decisions by chance — but would system users accept this? To find out whether randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey in which subjects rated their personal attitudes toward the decisions of moral algorithms in different scenarios. Our results (n=330) show that, although slightly more respondents state a preference for decisions based on ethical rules, randomization is perceived to be most just and morally right and thus may drive decisions when all other objective parameters are equal.
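
The tie-breaking idea described in the abstract can be illustrated with a short sketch (not taken from the paper): a hypothetical decision routine that first compares objective parameters and falls back to a random choice only when those parameters are equal. The function name, the Option class, and the risk_score field are illustrative assumptions, not part of the authors' method.

```python
import random
from dataclasses import dataclass

@dataclass
class Option:
    """A possible action the automated system could take (hypothetical model)."""
    label: str
    risk_score: float  # lower is better; an assumed objective parameter

def choose_action(options: list[Option]) -> Option:
    """Pick the option with the lowest risk; break exact ties by chance.

    This mirrors the abstract's idea that randomization decides only when
    all other objective parameters are equal, rather than ranking people
    by personal characteristics.
    """
    best = min(o.risk_score for o in options)
    candidates = [o for o in options if o.risk_score == best]
    return random.choice(candidates)

# Example: two equally risky outcomes are decided by chance.
print(choose_action([Option("swerve left", 0.4), Option("swerve right", 0.4)]).label)
```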