Is wearing these sunglasses an attack? Obligations under IHL related to anti-AI countermeasures

Author: Jonathan Kwik
Journal: International Review of the Red Cross
Published: 2024-03-20 (Journal Article)
DOI: https://doi.org/10.1017/s1816383124000067
Abstract
As usage of military artificial intelligence (AI) expands, so will anti-AI countermeasures, known as adversarials. International humanitarian law offers many protections through its obligations in attack, but the nature of adversarials generates ambiguity regarding which party (system user or opponent) should incur attacker responsibilities. This article offers a cognitive framework for legally analyzing adversarials. It explores the technical, tactical and legal dimensions of adversarials, and proposes a model based on foreseeable harm to determine when legal responsibility should transfer to the countermeasure's author. The article provides illumination to the future combatant who ponders, before putting on their adversarial sunglasses: “Am I conducting an attack?”