A normative framework of artificial moral agents

Z. Gan
DOI: 10.1109/istas52410.2021.9629179
Published in: 2021 IEEE International Symposium on Technology and Society (ISTAS), 2021-10-28

Abstract

This paper proposes a normative framework for designing and evaluating ethical machines, that is, artificial moral agents (AMAs). There is as yet no systematic evaluation for comparing ethical machines with similar functions, or for validating whether a machine achieves the goals its research team has defined. The framework consists mainly of conceptual analysis and the nontechnical aspects of installing ethics into a machine, with only a few suggestions in the technical dimension. In contrast to the mainstream action-centric models of AMAs, which unjustifiably presuppose "ethics in, ethics out," I propose an agent-based model of AMAs, which stresses how to design a machine so that it replicates human ethical behavior and is able to learn, adjust, and progress on its own. In particular, the agent-based model of AMAs makes and adjusts its decisions from the recipient's position, with a second-person perspective that is essentially interpersonal. When an AMA uses the second-person perspective to make decisions, the machine should be responsive to the reactions of, and the impacts on, the humans in its circumstances. AMAs should judge synthetically from the ethical features of a situation rather than merely following ethical principles or any single ethical theory. The agent-based model of AMAs therefore does not have to favor a specific ethical theory, which also avoids the disadvantages of following one. AMAs should furthermore be deployed in specific, narrow domains, because the second-person perspective is often context-dependent. If more research teams incline toward creating domain-specific AMAs, their products will help establish domain-specific evaluations for comparing different AMA systems. The proposed framework delivers the message that not only the process of creating AMAs but also their decisions should always be human-in-the-loop.
In short, the purpose of creating AMAs should be to help humans make better ethical decisions, not to replace human decision-making.