Moral Agency, Moral Responsibility, and Artifacts: What Existing Artifacts Fail to Achieve (and Why), and Why They, Nevertheless, Can (and Do!) Make Moral Claims Upon Us

Joel Parthemore and Blay Whitby
International Journal of Machine Consciousness, September 2014
DOI: 10.1142/S1793843014400162
This paper follows directly from an earlier paper in which we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As before, we take moral agency to be that condition in which an agent can appropriately be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness. It must exhibit sophisticated conceptual abilities, going well beyond what most conceptual agents likely possess: not least, it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. After reviewing these conditions and pouring cold water on recent claims to have achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artifacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.