Algorithms and Decision-Making in Military Artificial Intelligence

IF 1.7 Q2 INTERNATIONAL RELATIONS
Denise Garcia
{"title":"军事人工智能中的算法与决策","authors":"Denise Garcia","doi":"10.1080/13600826.2023.2273484","DOIUrl":null,"url":null,"abstract":"ABSTRACTAlong the line of exploring the implications of algorithmic decision-making for international law, Garcia highlights the growing dehumanization process in the military domain that reduces humans to mere data and pattern-recognizing technologies. ‘Immoral codes’ containing instructions to target and kill humans raise the likelihood of unpredictable and unintended violence. Compounding this challenge is a lack of international law that puts restraints on the pervasive use of algorithms in society and the ongoing military AI race. Garcia argues that current international mechanisms under international humanitarian law developed to regulate ‘hardware’ are not sufficient to withstand ‘software’ challenges posed by algorithmic-based weaponry. Instead, the human-centricity of international law is eroded by algorithmic decision-making and more violence and instability triggered by great power rivalry. International rules need to be updated to ensure the prohibition of killing that is outside human oversight.KEYWORDS: Artificial intelligencealgorithmsmilitaryinternational lawmachine learning Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 I am grateful to Stephen Alt, Gugan Kathiresan, and Jenia Browne for their research recommendations and assistance. I am also thankful to Shane Gravel.2 At this stage, an important qualification is warranted. “Autonomy” is a machine or software’s capacity to perform a task or function on its own. Recently, “autonomy” has also come to encompass a wide range of AI-enabled systems.3 See also: https://www.stopkillerrobots.org/stop-killer-robots/emerging-tech-and-artificial-intelligence/ (accessed 02/25/2023).4 Thanks to Gugan Kathiresan for this insight.Additional informationNotes on contributorsDenise GarciaDenise Garcia, a Ph.D. from the Graduate Institute of International and Development Studies of the University of Geneva, is a professor at Northeastern University in Boston and a founding faculty member of the Institute for Experiential Robotics. She is formerly a member of the International Panel for the Regulation of Autonomous Weapons (2017–2022), currently of the Research Board of the Toda Peace Institute (Tokyo) and the Institute for Economics and Peace (Sydney), Vice-chair of the International Committee for Robot Arms Control, and of Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching award-winner, her recent publications appeared at Nature, Foreign Affairs, and other top journals. 
Her upcoming book is The AI Military Race: Common Good Governance in the Age of Artificial Intelligence with Oxford University Press 2023.","PeriodicalId":46197,"journal":{"name":"Global Society","volume":"27 8","pages":"0"},"PeriodicalIF":1.7000,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Algorithms and Decision-Making in Military Artificial Intelligence\",\"authors\":\"Denise Garcia\",\"doi\":\"10.1080/13600826.2023.2273484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACTAlong the line of exploring the implications of algorithmic decision-making for international law, Garcia highlights the growing dehumanization process in the military domain that reduces humans to mere data and pattern-recognizing technologies. ‘Immoral codes’ containing instructions to target and kill humans raise the likelihood of unpredictable and unintended violence. Compounding this challenge is a lack of international law that puts restraints on the pervasive use of algorithms in society and the ongoing military AI race. Garcia argues that current international mechanisms under international humanitarian law developed to regulate ‘hardware’ are not sufficient to withstand ‘software’ challenges posed by algorithmic-based weaponry. Instead, the human-centricity of international law is eroded by algorithmic decision-making and more violence and instability triggered by great power rivalry. International rules need to be updated to ensure the prohibition of killing that is outside human oversight.KEYWORDS: Artificial intelligencealgorithmsmilitaryinternational lawmachine learning Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 I am grateful to Stephen Alt, Gugan Kathiresan, and Jenia Browne for their research recommendations and assistance. I am also thankful to Shane Gravel.2 At this stage, an important qualification is warranted. “Autonomy” is a machine or software’s capacity to perform a task or function on its own. Recently, “autonomy” has also come to encompass a wide range of AI-enabled systems.3 See also: https://www.stopkillerrobots.org/stop-killer-robots/emerging-tech-and-artificial-intelligence/ (accessed 02/25/2023).4 Thanks to Gugan Kathiresan for this insight.Additional informationNotes on contributorsDenise GarciaDenise Garcia, a Ph.D. from the Graduate Institute of International and Development Studies of the University of Geneva, is a professor at Northeastern University in Boston and a founding faculty member of the Institute for Experiential Robotics. She is formerly a member of the International Panel for the Regulation of Autonomous Weapons (2017–2022), currently of the Research Board of the Toda Peace Institute (Tokyo) and the Institute for Economics and Peace (Sydney), Vice-chair of the International Committee for Robot Arms Control, and of Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching award-winner, her recent publications appeared at Nature, Foreign Affairs, and other top journals. 
Her upcoming book is The AI Military Race: Common Good Governance in the Age of Artificial Intelligence with Oxford University Press 2023.\",\"PeriodicalId\":46197,\"journal\":{\"name\":\"Global Society\",\"volume\":\"27 8\",\"pages\":\"0\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Global Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/13600826.2023.2273484\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INTERNATIONAL RELATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Global Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/13600826.2023.2273484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INTERNATIONAL RELATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Along the line of exploring the implications of algorithmic decision-making for international law, Garcia highlights the growing dehumanization process in the military domain that reduces humans to mere data and pattern-recognizing technologies. 'Immoral codes' containing instructions to target and kill humans raise the likelihood of unpredictable and unintended violence. Compounding this challenge is the lack of international law restraining the pervasive use of algorithms in society and the ongoing military AI race. Garcia argues that the current international mechanisms under international humanitarian law, developed to regulate 'hardware', are not sufficient to withstand the 'software' challenges posed by algorithm-based weaponry. Instead, the human-centricity of international law is being eroded by algorithmic decision-making and by the additional violence and instability triggered by great-power rivalry. International rules need to be updated to ensure the prohibition of killing outside human oversight.

KEYWORDS: artificial intelligence; algorithms; military; international law; machine learning

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes
1. I am grateful to Stephen Alt, Gugan Kathiresan, and Jenia Browne for their research recommendations and assistance. I am also thankful to Shane Gravel.
2. At this stage, an important qualification is warranted. "Autonomy" is a machine's or software's capacity to perform a task or function on its own. Recently, "autonomy" has also come to encompass a wide range of AI-enabled systems.
3. See also: https://www.stopkillerrobots.org/stop-killer-robots/emerging-tech-and-artificial-intelligence/ (accessed 02/25/2023).
4. Thanks to Gugan Kathiresan for this insight.

Notes on contributors
Denise Garcia, who holds a Ph.D. from the Graduate Institute of International and Development Studies of the University of Geneva, is a professor at Northeastern University in Boston and a founding faculty member of the Institute for Experiential Robotics. She is a former member of the International Panel for the Regulation of Autonomous Weapons (2017–2022) and currently serves on the Research Board of the Toda Peace Institute (Tokyo) and of the Institute for Economics and Peace (Sydney), as Vice-chair of the International Committee for Robot Arms Control, and on the Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching-award winner, her recent publications have appeared in Nature, Foreign Affairs, and other top journals. Her forthcoming book is The AI Military Race: Common Good Governance in the Age of Artificial Intelligence (Oxford University Press, 2023).
Source journal
Global Society
CiteScore: 3.10
Self-citation rate: 6.20%
Articles published: 32
Journal description: Global Society covers the new agenda in global and international relations and encourages innovative approaches to the study of global and international issues from a range of disciplines. It promotes the analysis of transactions at multiple levels and, in particular, the way in which these transactions blur the distinction between the sub-national, national, transnational, international and global levels. An ever-integrating global society raises a number of issues for global and international relations that do not fit comfortably within established "paradigms". Among these are the international and global consequences of nationalism and struggles for identity, migration, racism, religious fundamentalism, terrorism and criminal activities.