When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest
{"title":"When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest","authors":"Anja Bodenschatz","doi":"10.1016/j.chbah.2024.100097","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous systems (ASs) decide upon ethical dilemmas and their artificial intelligence as well as situational settings become more and more complex. However, to study common-sense morality concerning ASs abstracted dilemmas on autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas is when the AS’s users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group), or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV, and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100097"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Autonomous systems (ASs) decide upon ethical dilemmas, and both their artificial intelligence and the situations they operate in are becoming increasingly complex. Nevertheless, abstracted dilemmas about autonomous vehicle (AV) accidents remain a common tool for studying common-sense morality concerning ASs. A special case of ethical dilemma arises when the AS’s own users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group) or egalitarian programming (i.e., to treat every person equally); however, they want their own AV to protect them rather than the “greater good”. That people reject utilitarian programming as an AS’s user while supporting it from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability that would implement egalitarian programming, namely decision randomization, have not yet been elicited for dilemmas involving self-interest. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and AS users’ urge for self-protection.
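The abstract treats decision randomization as a concrete technical capability but, as a preference-elicitation study, does not specify an algorithm. As a minimal illustrative sketch (all function names here are hypothetical), one natural reading of egalitarian randomization is a fair lottery between the two maneuvers, in contrast to a utilitarian rule that always saves the larger group:

```python
import random

def utilitarian_choice(passengers: int, others: int) -> str:
    # Utilitarian rule: always save the larger group.
    return "protect_passengers" if passengers >= others else "protect_others"

def egalitarian_choice(passengers: int, others: int, rng: random.Random) -> str:
    # Egalitarian randomization: a fair 50/50 lottery between the two
    # maneuvers. Group sizes do not bias the draw, so every involved
    # person faces the same ex-ante survival probability (0.5).
    return "protect_passengers" if rng.random() < 0.5 else "protect_others"

if __name__ == "__main__":
    rng = random.Random(42)  # seeded only to make the demo reproducible
    # The paper's dilemma: one passenger versus several others.
    print(utilitarian_choice(passengers=1, others=5))       # always "protect_others"
    print(egalitarian_choice(passengers=1, others=5, rng=rng))  # coin flip
```

Under the randomized rule, the sole passenger and each of the several others all face the same ex-ante survival chance, which is what would let it implement the egalitarian programming the abstract describes while still giving users a nonzero chance of self-protection.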