Ethics outside the box: empirical tools for an ethics of artificial agents

Peter Danielson
DOI: 10.1145/2465449.2493383
Published in: International Symposium on Component-Based Software Engineering, June 17, 2013
Citations: 0

Abstract

Software introduces new kinds of agents: artificial software agents (ASAs), including, for example, driverless trains and cars. To create these devices responsibly, engineers need an ethics of software agency. However, this pragmatic professional need for guidance and regulation conflicts with the weakness of moral science. We do not know much about how ethics informs interactions with artificial agents. Most importantly, we don't know how people will regard ASAs as agents: their agents (strictly speaking) and also their competitive and cooperative partners. Naturally, we want to deal with these new problems with our old ethical tools, but this conservative strategy may not work, and if it does not, it may lead to a catastrophic failure to anticipate the emerging moral landscape. (Just ask the creators of genetically modified foods.)

1. This lecture will look at the box, or frame, of traditional ethics and some ways to use experimental data to get outside it. The lecture uses some quick and nasty clicker experiments to point us to disturbing evidence from recent cognitive moral psychology about the form and content of our ethical apparatus (Haidt 2012) and its universality (Mikhail 2007). Then we turn to some new evidence on the ethics of human-ASA interaction. We focus on three surprising features of human-ASA interaction that disturb received ethical paradigms:
   1) Overactive deontology: the tendency to seek out a culprit to blame, even if it's the victim.
   2) Utopian consequentialism: denying the constraints of acting in the imperfect real world by shifting to wishful perfectionism.
   3) Embracing mechanical exploitation: accepting worse behavior from a program than one would accept from a person in Ultimatum Game experiments.

2. Next, we show how an experimental, cognitive, and game-theoretic approach to ethics can situate and explain these problems. We play some games based on policy decisions for the emerging technology of driverless cars that remind us of the strategic dimension of ethics. We also examine weak experimental evidence that engineers think differently about ethics and technology from other moral tribes or types.

3. However, we argue that theory cannot solve our ethical problems. Neither ethical theory nor game theory has resources powerful enough to discover and, hopefully, to bridge our moralized divisions. For these formidable scientific and political (respectively) tasks we need new empirical methods. We offer two examples from our current research program:
   1) Anonymous input of moral and value data: clickers for face-to-face interaction.
   2) Democratic-scale deliberation: the N-Reasons web-based experimental prototype.
Both of these methods challenge our research ethics, which experimental ethics shares with experimental software engineering.

As some of the data discussed in the lecture comes from the Robot Ethics survey, you will be better informed and represented if you visit http://your-views.org/D7/Robot_Ethics_Welcome. The "class" for the conference is "CompArch".
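To make the Ultimatum Game comparison in point (3) concrete, here is a minimal sketch of a one-shot round under a simple fairness-threshold model of the responder. The thresholds and pot size are hypothetical illustrative parameters, not the lecture's experimental values; the point is only the structure of the comparison, in which a low offer that would be rejected from a human proposer is accepted from a program.

```python
def ultimatum_round(offer: float, pot: float, threshold: float) -> tuple[float, float]:
    """One-shot Ultimatum Game: the responder accepts iff the offer meets
    their fairness threshold (a fraction of the pot).
    Returns (proposer_payoff, responder_payoff)."""
    if offer >= threshold * pot:
        return pot - offer, offer
    return 0.0, 0.0  # rejection: both players get nothing

POT = 10.0
# Hypothetical thresholds: "embracing mechanical exploitation" means people
# demand more fairness from a human proposer than from a program.
THRESHOLD_VS_HUMAN = 0.3
THRESHOLD_VS_PROGRAM = 0.1

low_offer = 2.0  # a 20% offer
print(ultimatum_round(low_offer, POT, THRESHOLD_VS_HUMAN))    # (0.0, 0.0): rejected
print(ultimatum_round(low_offer, POT, THRESHOLD_VS_PROGRAM))  # (8.0, 2.0): accepted
```

The same low offer yields different outcomes purely because of who (or what) is believed to be proposing, which is exactly the asymmetry the lecture reports as disturbing for received ethical paradigms.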