Algorithmic Warfare: Taking Stock of a Research Programme

Impact Factor: 1.7 · JCR Q2 (International Relations)
Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, Guangyu Qiao-Franco, Tom F. A. Watts
Journal: Global Society
DOI: 10.1080/13600826.2023.2263473
Publication date: 1 October 2023
Publication type: Journal Article
Citations: 0

Abstract

This article takes stock of the ongoing debates on algorithmic warfare in the social sciences. It seeks to equip scholars in International Relations and beyond with a critical review of both the empirical context of algorithmic warfare and the different theoretical approaches to studying practices related to the integration of algorithms (including automated, autonomous, and artificial intelligence (AI) technologies) into international armed conflict. The review focuses on discussions about (1) the implications of algorithmic warfare for strategic stability, (2) the morality and ethics of algorithmic warfare, (3) how algorithmic warfare relates to the laws and norms of war, and (4) popular imaginaries of algorithmic warfare. The article foregrounds a set of open research questions capable of moving the field toward a more interdisciplinary research agenda, and introduces the contributions made by the other articles in this Special Issue.

Keywords: algorithms; artificial intelligence (AI); war; security

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. Both automated and autonomous technologies denote systems that, once activated, can perform some tasks without human input. In robotics, automation implies less "sophistication" than autonomy because automated systems follow a pre-programmed sequence of actions (Winfield 2012, 12). However, integrating automated or autonomous technologies into military decision-making and targeting triggers similarly problematic consequences for human control because such technologies increase system complexity.
2. Autonomous weapon systems (AWS) are defined as systems that are able to make targeting "decisions" without immediate human intervention. They may or may not be based on AI technologies (Garcia, forthcoming).
3. Such dynamics are not restricted to the study of algorithmic warfare, as the study of remote warfare, for instance, demonstrates (Biegon, Rauta, and Watts 2021).
4. These include, for example, the Realities of Algorithmic Warfare project (PI: Lauren Gould) at the University of Utrecht and the DILEMA project (PI: Berenice Boutin) at the Asser Institute in The Hague.
5. Loitering munitions manufacturers hold that such systems require human assessment and authorisation prior to the release of force. But their marketing material also appears to point to a latent technological capability such systems may have to release the use of force without prior human assessment (Bode and Watts 2023).
6. The Martens Clause first appeared in the preamble to the 1899 Hague Convention. It is said to "fill a gap" when existing international law fails to address a situation by referring to the principles of humanity and the dictates of public conscience (Docherty 2018).
7. Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), Art. 31/2, Art. 51/4(b), and Art. 51/4(c).
8. Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I). Distinction: Articles 48, 51(2), and 52(2). Proportionality: Articles 51(5)(b), 57(2)(a)(iii), and 57(2)(b). Precautions: Article 57 and customary international law.

Funding

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 852123). Dr. Tom F. A. Watts' contribution to this paper was funded by a Leverhulme Trust Early Career Research Fellowship (ECF-2022-135).
Source journal

Global Society (International Relations)
CiteScore: 3.10
Self-citation rate: 6.20%
Articles published per year: 32
Journal description: Global Society covers the new agenda in global and international relations and encourages innovative approaches to the study of global and international issues from a range of disciplines. It promotes the analysis of transactions at multiple levels, and in particular, the way in which these transactions blur the distinction between the sub-national, national, transnational, international and global levels. An ever-integrating global society raises a number of issues for global and international relations which do not fit comfortably within established "paradigms". Among these are the international and global consequences of nationalism and struggles for identity, migration, racism, religious fundamentalism, terrorism and criminal activities.