{"title":"军事人工智能与区别原则:国家责任视角","authors":"Magdalena Pacholska","doi":"10.1017/S0021223722000188","DOIUrl":null,"url":null,"abstract":"Abstract Military artificial intelligence (AI)-enabled technology might still be in the relatively fledgling stages but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the ‘responsibility gap’ they would ostensibly produce. This contribution argues that while some military AI technologies may indeed cause a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility. The following analysis considers the latter regime and maps out crucial junctions in applying it to potential violations of the cornerstone of international humanitarian law (IHL) – the principle of distinction – resulting from the use of AI-enabled military technologies. It reveals that any challenges in ascribing responsibility in cases involving AWS would not be caused by the incorporation of AI, but stem from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder. The article reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL. It is proposed, however, that should the so-called fully autonomous weapon systems – that is, machine learning-based lethal systems that are capable of changing their own rules of operation beyond a predetermined framework – ever be fielded, it might be fairer to attribute their conduct to the fielding state, by conceptualising them as state agents, and treat them akin to state organs.","PeriodicalId":44911,"journal":{"name":"ISRAEL LAW REVIEW","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective\",\"authors\":\"Magdalena Pacholska\",\"doi\":\"10.1017/S0021223722000188\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Military artificial intelligence (AI)-enabled technology might still be in the relatively fledgling stages but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the ‘responsibility gap’ they would ostensibly produce. This contribution argues that while some military AI technologies may indeed cause a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility. The following analysis considers the latter regime and maps out crucial junctions in applying it to potential violations of the cornerstone of international humanitarian law (IHL) – the principle of distinction – resulting from the use of AI-enabled military technologies. It reveals that any challenges in ascribing responsibility in cases involving AWS would not be caused by the incorporation of AI, but stem from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder. 
The article reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL. It is proposed, however, that should the so-called fully autonomous weapon systems – that is, machine learning-based lethal systems that are capable of changing their own rules of operation beyond a predetermined framework – ever be fielded, it might be fairer to attribute their conduct to the fielding state, by conceptualising them as state agents, and treat them akin to state organs.\",\"PeriodicalId\":44911,\"journal\":{\"name\":\"ISRAEL LAW REVIEW\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2022-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISRAEL LAW REVIEW\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1017/S0021223722000188\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISRAEL LAW REVIEW","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/S0021223722000188","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective
Abstract

Military artificial intelligence (AI)-enabled technology might still be at a relatively fledgling stage, but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the ‘responsibility gap’ they would ostensibly produce. This contribution argues that while some military AI technologies may indeed pose a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility. The following analysis considers the latter regime and maps out crucial junctures in applying it to potential violations of the cornerstone of international humanitarian law (IHL) – the principle of distinction – resulting from the use of AI-enabled military technologies. It reveals that any challenges in ascribing responsibility in cases involving AWS would stem not from the incorporation of AI, but from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder. The article reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL. It is proposed, however, that should so-called fully autonomous weapon systems – that is, machine learning-based lethal systems capable of changing their own rules of operation beyond a predetermined framework – ever be fielded, it might be fairer to attribute their conduct to the fielding state by conceptualising them as state agents, and to treat them akin to state organs.