Leading in Artificial Intelligence through Confidence Building Measures

IF 1.2 | JCR Q2 (International Relations) | CAS Tier 3 (Sociology)
Michael C. Horowitz, L. Kahn
{"title":"通过建立信任措施引领人工智能","authors":"Michael C. Horowitz, L. Kahn","doi":"10.1080/0163660X.2021.2018794","DOIUrl":null,"url":null,"abstract":"The role of artificial intelligence (AI) in military use has been the subject of intense debates in the national security community in recent years— not only the potential for AI to reshape capabilities, but also the potential for unintentional conflict and escalation. For many analysts, fear that military applications of AI would lead to increased risk of accidents and inadvertent escalation looms large, regardless of the potential benefits. Those who are concerned can cite a plethora of potential ways things can go awry with algorithms: brittleness, biased or poisoned training data, hacks by adversaries, or just increased speed of decision-making leading to fear-based escalation. Yet, given its importance for the future of military power, it is imperative that the United States moves forward with responsible speed in designing, integrating, and deploying relevant military applications of AI. How should the United States simultaneously pursue AI swiftly while reducing the risk of unintentional conflict or escalation in the United States or elsewhere? The answer may lie in US leadership to promote responsible norms and standards of behavior for AI as part of a series of confidence-building measures (CBMs) tailored to reduce the likelihood of these scenarios.","PeriodicalId":46957,"journal":{"name":"Washington Quarterly","volume":" ","pages":"91 - 106"},"PeriodicalIF":1.2000,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Leading in Artificial Intelligence through Confidence Building Measures\",\"authors\":\"Michael C. Horowitz, L. Kahn\",\"doi\":\"10.1080/0163660X.2021.2018794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The role of artificial intelligence (AI) in military use has been the subject of intense debates in the national security community in recent years— not only the potential for AI to reshape capabilities, but also the potential for unintentional conflict and escalation. For many analysts, fear that military applications of AI would lead to increased risk of accidents and inadvertent escalation looms large, regardless of the potential benefits. Those who are concerned can cite a plethora of potential ways things can go awry with algorithms: brittleness, biased or poisoned training data, hacks by adversaries, or just increased speed of decision-making leading to fear-based escalation. Yet, given its importance for the future of military power, it is imperative that the United States moves forward with responsible speed in designing, integrating, and deploying relevant military applications of AI. How should the United States simultaneously pursue AI swiftly while reducing the risk of unintentional conflict or escalation in the United States or elsewhere? 
The answer may lie in US leadership to promote responsible norms and standards of behavior for AI as part of a series of confidence-building measures (CBMs) tailored to reduce the likelihood of these scenarios.\",\"PeriodicalId\":46957,\"journal\":{\"name\":\"Washington Quarterly\",\"volume\":\" \",\"pages\":\"91 - 106\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2021-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Washington Quarterly\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1080/0163660X.2021.2018794\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INTERNATIONAL RELATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Washington Quarterly","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1080/0163660X.2021.2018794","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INTERNATIONAL RELATIONS","Score":null,"Total":0}
Citations: 1

Abstract

The role of artificial intelligence (AI) in military use has been the subject of intense debates in the national security community in recent years— not only the potential for AI to reshape capabilities, but also the potential for unintentional conflict and escalation. For many analysts, fear that military applications of AI would lead to increased risk of accidents and inadvertent escalation looms large, regardless of the potential benefits. Those who are concerned can cite a plethora of potential ways things can go awry with algorithms: brittleness, biased or poisoned training data, hacks by adversaries, or just increased speed of decision-making leading to fear-based escalation. Yet, given its importance for the future of military power, it is imperative that the United States moves forward with responsible speed in designing, integrating, and deploying relevant military applications of AI. How should the United States simultaneously pursue AI swiftly while reducing the risk of unintentional conflict or escalation in the United States or elsewhere? The answer may lie in US leadership to promote responsible norms and standards of behavior for AI as part of a series of confidence-building measures (CBMs) tailored to reduce the likelihood of these scenarios.
Source journal: Washington Quarterly
CiteScore: 2.90
Self-citation rate: 5.90%
Articles published: 20
About the journal: The Washington Quarterly (TWQ) is a journal of global affairs that analyzes strategic security challenges, changes, and their public policy implications. TWQ is published out of one of the world's preeminent international policy institutions, the Center for Strategic and International Studies (CSIS), and addresses topics such as:
• The U.S. role in the world
• Emerging great powers: Europe, China, Russia, India, and Japan
• Regional issues and flashpoints, particularly in the Middle East and Asia
• Weapons of mass destruction proliferation and missile defenses
• Global perspectives to reduce terrorism
Contributors are drawn from outside as well as inside the United States and reflect diverse political, regional, and professional perspectives.