{"title":"当自身利益与 \"大局 \"对立时--涉及用户自身利益的自主系统伦理困境中的随机化决策","authors":"Anja Bodenschatz","doi":"10.1016/j.chbah.2024.100097","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous systems (ASs) decide upon ethical dilemmas and their artificial intelligence as well as situational settings become more and more complex. However, to study common-sense morality concerning ASs abstracted dilemmas on autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas is when the AS’s users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group), or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV, and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100097"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest\",\"authors\":\"Anja Bodenschatz\",\"doi\":\"10.1016/j.chbah.2024.100097\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Autonomous systems (ASs) decide upon ethical dilemmas and their artificial intelligence as well as situational settings become more and more complex. However, to study common-sense morality concerning ASs abstracted dilemmas on autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas is when the AS’s users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group), or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV, and their survival stands against the survival of several others. 
Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"2 2\",\"pages\":\"Article 100097\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882124000574\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Autonomous systems (ASs) decide ethical dilemmas, and both their artificial intelligence and the situations they operate in are becoming increasingly complex. Abstracted dilemmas about autonomous vehicle (AV) accidents are a common tool for studying common-sense morality concerning ASs. A special case of ethical dilemma arises when the AS’s own users are affected. Many people want AVs to follow utilitarian programming (e.g., saving the larger group) or egalitarian programming (i.e., treating every person equally), yet they want their own AV to protect them rather than the “greater good”. That people reject utilitarian programming as an AS’s users while supporting it from an impartial perspective has been termed the “social dilemma of AVs”. Preferences for decision randomization, another technical capability that would implement egalitarian programming, have not yet been elicited for dilemmas involving self-interest. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where a person is the sole passenger of an AV and their survival stands against the survival of several others. The results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and AS users’ urge for self-protection.
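
To make the randomization mechanism concrete, below is a minimal sketch in Python; the function name and simulation setup are hypothetical illustrations, not taken from the paper. It implements an equal-probability lottery between the self-protective and the self-sacrificial outcome: a fair coin gives the sole passenger and each member of the larger group the same 50% survival chance, which is one way egalitarian programming could be operationalized.

import random

def randomized_dilemma_choice(rng: random.Random) -> str:
    """Hypothetical sketch of decision randomization: an equal-probability
    lottery between the two outcomes of the dilemma. A fair coin gives
    every person involved, the sole passenger as well as each member of
    the larger group, the same 50% chance of survival."""
    return "self-protective" if rng.random() < 0.5 else "self-sacrificial"

# Simulating many dilemmas confirms the even split across outcomes.
rng = random.Random(42)
draws = [randomized_dilemma_choice(rng) for _ in range(10_000)]
print(draws.count("self-protective") / len(draws))  # approximately 0.5

A weighted lottery (e.g., probabilities proportional to group sizes) would be a different design choice, closer to utilitarian programming; the equal-probability coin flip above matches the egalitarian reading of treating every person equally.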