Is wearing these sunglasses an attack? Obligations under IHL related to anti-AI countermeasures

IF 0.6 · CAS Region 4 (Sociology) · JCR Q2 (Law)
Jonathan Kwik
International Review of the Red Cross, Vol. 18, No. 1. Published 20 March 2024. DOI: 10.1017/s1816383124000067
Citations: 0

Abstract

As usage of military artificial intelligence (AI) expands, so will anti-AI countermeasures, known as adversarials. International humanitarian law offers many protections through its obligations in attack, but the nature of adversarials generates ambiguity regarding which party (system user or opponent) should incur attacker responsibilities. This article offers a cognitive framework for legally analyzing adversarials. It explores the technical, tactical and legal dimensions of adversarials, and proposes a model based on foreseeable harm to determine when legal responsibility should transfer to the countermeasure's author. The article provides illumination to the future combatant who ponders, before putting on their adversarial sunglasses: “Am I conducting an attack?”

Source journal: CiteScore 1.10 · Self-citation rate 28.60% · Articles per year: 92