Author: Denise Garcia
Journal: Global Society
DOI: 10.1080/13600826.2023.2273484
Published online: 26 October 2023
Algorithms and Decision-Making in Military Artificial Intelligence
ABSTRACT
In exploring the implications of algorithmic decision-making for international law, Garcia highlights the growing dehumanization process in the military domain, which reduces humans to mere data points for pattern-recognizing technologies. ‘Immoral codes’ containing instructions to target and kill humans raise the likelihood of unpredictable and unintended violence. Compounding this challenge are the absence of international law restraining the pervasive use of algorithms in society and the ongoing military AI race. Garcia argues that the current international mechanisms under international humanitarian law, developed to regulate ‘hardware’, are not sufficient to meet the ‘software’ challenges posed by algorithm-based weaponry. Instead, the human-centricity of international law is eroded by algorithmic decision-making and by the greater violence and instability triggered by great-power rivalry. International rules need to be updated to prohibit killing that takes place outside human oversight.

KEYWORDS: Artificial intelligence; algorithms; military; international law; machine learning

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes
1. I am grateful to Stephen Alt, Gugan Kathiresan, and Jenia Browne for their research recommendations and assistance. I am also thankful to Shane Gravel.
2. At this stage, an important qualification is warranted. “Autonomy” is a machine or software’s capacity to perform a task or function on its own. Recently, “autonomy” has also come to encompass a wide range of AI-enabled systems.
3. See also: https://www.stopkillerrobots.org/stop-killer-robots/emerging-tech-and-artificial-intelligence/ (accessed 02/25/2023).
4. Thanks to Gugan Kathiresan for this insight.

Notes on contributors
Denise Garcia, who holds a Ph.D. from the Graduate Institute of International and Development Studies of the University of Geneva, is a professor at Northeastern University in Boston and a founding faculty member of the Institute for Experiential Robotics. She was formerly a member of the International Panel for the Regulation of Autonomous Weapons (2017–2022). She currently serves on the Research Board of the Toda Peace Institute (Tokyo) and of the Institute for Economics and Peace (Sydney), is Vice-chair of the International Committee for Robot Arms Control, and is a member of the Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching-award winner, she has published recently in Nature, Foreign Affairs, and other top journals. Her forthcoming book is The AI Military Race: Common Good Governance in the Age of Artificial Intelligence (Oxford University Press, 2023).
Journal introduction:
Global Society covers the new agenda in global and international relations and encourages innovative approaches to the study of global and international issues from a range of disciplines. It promotes the analysis of transactions at multiple levels and, in particular, the way in which these transactions blur the distinctions between the sub-national, national, transnational, international, and global levels. An ever-integrating global society raises a number of issues for global and international relations that do not fit comfortably within established "paradigms". Among these are the international and global consequences of nationalism and struggles for identity, migration, racism, religious fundamentalism, terrorism, and criminal activities.