Title: A normative framework of artificial moral agents
Author: Z. Gan
Venue: 2021 IEEE International Symposium on Technology and Society (ISTAS)
Published: 2021-10-28
DOI: 10.1109/istas52410.2021.9629179
Citations: 0
Abstract
This paper proposes a normative framework for designing and evaluating ethical machines, that is, artificial moral agents (AMAs). There are as yet no systematic evaluations that compare ethical machines with similar functions, or that validate whether a machine succeeds in achieving the goals its research team defined. The framework consists mainly of conceptual analysis and nontechnical aspects of installing ethics into a machine, with only a few suggestions on the technical dimension. In contrast to the mainstream action-centric models of AMAs, which unjustifiably assume "ethics in, ethics out," I propose an agent-based model of AMAs that stresses how to design a machine that replicates human ethical behavior and possesses the abilities to learn, adjust, and progress on its own. In particular, the agent-based model of AMAs makes and adjusts its decisions from a recipient's position with a second-person perspective, which is essentially interpersonal. When an AMA uses the second-person perspective to make decisions, the machine should be responsive to the reactions of, and impacts on, the humans in its circumstances. AMAs should judge synthetically from the ethical features of a situation rather than merely follow ethical principles or any single ethical theory. The agent-based model of AMAs therefore need not commit to a specific ethical theory, which also avoids the disadvantages of following one. AMAs should furthermore be applied in specific rather than broad domains, because the second-person perspective is often context-dependent. If more research teams incline toward creating domain-specific AMAs, the resulting systems will contribute to establishing domain-specific evaluations for comparing different AMA systems. The proposed framework delivers the message that not only the process of creating AMAs but also their decisions should always be human-in-the-loop. In short, creating AMAs should contribute to better ethical decision-making for humans rather than replace human decision-making.