{"title":"惩罚机器人——麻雀责任归属问题的解决之道","authors":"M. Zając","doi":"10.1080/15027570.2020.1865455","DOIUrl":null,"url":null,"abstract":"ABSTRACT The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon Systems (LAWS) is problematic. Robert Sparrow argues that if specific agents cannot be fairly and reasonably held responsible for war crimes committed by such systems, then LAWS lack legal and moral legitimacy. He further argues that neither the programmers and engineers creating truly autonomous systems, nor their commanders, nor the machines themselves can be held responsible for the actions of LAWS. This would be unfair in the case of the humans and impossible in the case of the machines, which cannot be punished as they lack the capacity for phenomenal experience. I challenge the latter claim by showing that all the morally desirable goals that punishment aims for in humans – incapacitation, rehabilitation and deterrence – can be effected in robots by alternative but more reliable means. My account focuses on describing how the behaviors enforced by deterrence in humans may be achieved via a mixture of prevention and threat of goal frustration, even if the retributive aspect of punishment cannot be replicated in the case of LAWS.","PeriodicalId":39180,"journal":{"name":"Journal of Military Ethics","volume":"19 1","pages":"285 - 291"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15027570.2020.1865455","citationCount":"2","resultStr":"{\"title\":\"Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem\",\"authors\":\"M. Zając\",\"doi\":\"10.1080/15027570.2020.1865455\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon Systems (LAWS) is problematic. Robert Sparrow argues that if specific agents cannot be fairly and reasonably held responsible for war crimes committed by such systems, then LAWS lack legal and moral legitimacy. He further argues that neither the programmers and engineers creating truly autonomous systems, nor their commanders, nor the machines themselves can be held responsible for the actions of LAWS. This would be unfair in the case of the humans and impossible in the case of the machines, which cannot be punished as they lack the capacity for phenomenal experience. I challenge the latter claim by showing that all the morally desirable goals that punishment aims for in humans – incapacitation, rehabilitation and deterrence – can be effected in robots by alternative but more reliable means. 
My account focuses on describing how the behaviors enforced by deterrence in humans may be achieved via a mixture of prevention and threat of goal frustration, even if the retributive aspect of punishment cannot be replicated in the case of LAWS.\",\"PeriodicalId\":39180,\"journal\":{\"name\":\"Journal of Military Ethics\",\"volume\":\"19 1\",\"pages\":\"285 - 291\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/15027570.2020.1865455\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Military Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/15027570.2020.1865455\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Military Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15027570.2020.1865455","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Arts and Humanities","Score":null,"Total":0}
Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem
ABSTRACT The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon Systems (LAWS) is problematic. Robert Sparrow argues that if specific agents cannot be fairly and reasonably held responsible for war crimes committed by such systems, then LAWS lack legal and moral legitimacy. He further argues that neither the programmers and engineers creating truly autonomous systems, nor their commanders, nor the machines themselves can be held responsible for the actions of LAWS. This would be unfair in the case of the humans and impossible in the case of the machines, which cannot be punished as they lack the capacity for phenomenal experience. I challenge the latter claim by showing that all the morally desirable goals that punishment aims for in humans – incapacitation, rehabilitation and deterrence – can be effected in robots by alternative but more reliable means. My account focuses on describing how the behaviors enforced by deterrence in humans may be achieved via a mixture of prevention and threat of goal frustration, even if the retributive aspect of punishment cannot be replicated in the case of LAWS.