The Consequences of Generative AI for Democracy, Governance and War

IF 1.5 3区 社会学 Q2 INTERNATIONAL RELATIONS
Steven Feldstein
{"title":"The Consequences of Generative AI for Democracy, Governance and War","authors":"Steven Feldstein","doi":"10.1080/00396338.2023.2261260","DOIUrl":null,"url":null,"abstract":"AbstractThe potential impact of generative AI across politics, governance and war is enormous, and is the subject of considerable speculation informed by few hard facts. Yet it is possible to identify some major challenges. They include threats to democracies by privately controlled models that gain tremendous power to shape discourse and affect democratic deliberation; enhanced surveillance and propaganda dissemination by authoritarian regimes; new capacities for criminal and terrorist actors to carry out cyber attacks and related disruptions; and transformed war planning and military operations reflecting the accelerated dehumanisation of lethal force. While new innovations historically require time to take root, generative AI is likely to be adopted swiftly. Stakeholders must formulate pragmatic approaches to manage oncoming risks.Key words: Artificial intelligence (AI)chatbotsChatGPTcyber attackslarge language model (LLM)military planningpropagandasurveillance AcknowledgementsI would like to thank Tom Carothers, Matt O’Shaughnessy and Gavin Wilde for their valuable comments and feedback, and Brian (Chun Hey) Kot for his research assistance.Notes1 See Rishi Bommasani et al., ‘On the Opportunities and Risks of Foundation Models’, Center for Research on Foundational Models, Stanford University, 12 July 2022, https://crfm.stanford.edu/assets/report.pdf; and Helen Toner, ‘What Are Generative AI, Large Language Models, and Foundation Models?’, Center for Security and Emerging Technology, Georgetown University, May 2023, https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/.2 See Kevin Roose, ‘How Does ChatGPT Really Work?’, New York Times, 28 March 2023, 
https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html.3 See Jordan Hoffmann et al., ‘An Empirical Analysis of Computeoptimal Large Language Model Training’, Google DeepMind, 12 April 2022, https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training; and Pranshu Verma and Kevin Schaul, ‘See Why AI Like ChatGPT Has Gotten So Good, So Fast’, Washington Post, 24 May 2023, https://www.washingtonpost.com/business/interactive/2023/artificial-intelligence-tech-rapid-advances/.4 See Tom B. Brown et al., ‘Language Models Are Few-shot Learners’, 34th Conference on Neural Information Processing Systems (Neur IPS 2020), Vancouver, Canada, 22 July 2020, https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.5 See Lukas Esterle, ‘Deep Learning in Multiagent Systems’, in Alexandros Iosifidis and Anastasios Tefas (eds), Deep Learning for Robot Perception and Cognition (Cambridge, MA: Academic Press, 2022), pp. 435–60; and David Nield, ‘Supercharge Your ChatGPT Prompts with Auto-GPT’, Wired, 21 May 2023, https://www.wired.co.uk/article/chatgpt-prompts-auto-gpt. It is worth noting that the autonomy of an AI system sits on a spectrum, rather than being binary. 
While the goal of developers is to increase the ability of AI systems to complete increasingly complex tasks, this will be a slow evolution rather than a sudden jump in capabilities.6 See Chloe Xiang, ‘Developers Are Connecting Multiple AI Agents to Make More “Autonomous” AI’, Vice, 4 April 2023, https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai.7 See Mark Sullivan, ‘Auto-GPT and BabyAGI: How “Autonomous Agents” Are Bringing Generative AI to the Masses’, Fast Company, 13 April 2023, https://www.fastcompany.com/90880294/auto-gpt-and-babyagi-how-autonomous-agents-are-bringing-generative-ai-to-the-masses.8 See, for example, Josh Zumbrun, ‘Why ChatGPT Is Getting Dumber at Basic Math’, Wall Street Journal, 4 August 2023, https://www.wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0.9 See, for example, Tristan Bove, ‘Bill Gates Says that the A.I. Revolution Means Everyone Will Have Their Own “White Collar” Personal Assistant’, Fortune, 6 May 2023, https://fortune.com/2023/03/22/bill-gates-ai-work-productivity-personal-assistants-chatgpt/.10 Gary Marcus, ‘Senate Testimony’, US Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, 118th Congress, 16 May 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf.11 See Davey Alba, ‘OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails’, Bloomberg, 8 December 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results.12 See Emily M. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 
610–23, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.13 Hannes Bajohr, ‘Whoever Controls Language Models Controls Politics’, 8 April 2023, https://hannesbajohr.de/en/2023/04/08/whoever-controls-language-models-controls-politics/.14 Ibid.15 See Steven Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations: How Will This Influence Global Norms?’, Democratization, 2023, pp. 1–18.16 See Kayleen Devlin and Joshua Cheetham, ‘Fake Trump Arrest Photos: How to Spot an AI-generated Image’, BBC News, 24 March 2023, https://www.bbc.com/news/world-us-canada-65069316.17 ‘Beat Biden’, YouTube, 25 April 2023, https://www.youtube.com/watch?v=kLMMxgtxQ1Y. See also Isaac Stanley-Becker and John Wagner, ‘Republicans Counter Biden Announcement with Dystopian, AI-aided Video’, Washington Post, 25 April 2023, https://www.washingtonpost.com/politics/2023/04/25/rnc-biden-ad-ai/.18 See Andrew R. Sorkin et al., ‘An A.I.generated Spoof Rattles the Markets’, New York Times, 23 May 2023, https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html.19 See Josh A. Goldstein et al., ‘Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations’, January 2023, https://cdn.openai.com/papers/forecasting-misuse.pdf.20 See Thor Benson, ‘Brace Yourself for the 2024 Deepfake Election’, Wired, 27 April 2023, https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/.21 Goldstein et al., ‘Generative Language Models and Automated Influence Operations’.22 Josh A. Goldstein and Girish Sastry, ‘The Coming Age of AI-powered Propaganda’, Foreign Affairs, 27 April 2023, https://www.foreignaffairs.com/united-states/coming-age-ai-powered-propaganda.23 See Ben M. Tappin et al., ‘Quantifying the Potential Persuasive Returns to Political Microtargeting’, Proceedings of the National Academy of Sciences, vol. 120, no. 25, June 2023, https://www.pnas.org/doi/10.1073/pnas.2216261120. 
The literature on disinformation is not settled about how much false online information impacts and undermines democracy. See, for example, Jon Bateman et al., ‘Measuring the Effects of Influence Operations: Key Findings and Gaps from Empirical Research’, Carnegie Endowment for International Peace – PCIO Baseline, 28 June 2021, https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824; and Joshua A. Tucker et al., ‘Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature’, Hewlett Foundation, 19 March 2018, https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/.24 See Nathan E. Sanders and Bruce Schneier, ‘How ChatGPT Hijacks Democracy’, New York Times, 15 January 2023, https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.25 See Sarah Kreps and Douglas Kriner, ‘How Generative AI Impacts Democratic Engagement’, Brookings Institution, 21 March 2023, https://www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement/.26 See Steven Feldstein, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, September 2019, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847; Steven Feldstein, ‘How Artificial Intelligence Is Reshaping Repression’, Journal of Democracy, vol. 30, no. 1, January 2019, pp. 40–52; Steven Feldstein, The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (New York: Oxford University Press, 2021); Andrea Kendall-Taylor et al., ‘The Digital Dictators’, Foreign Affairs, vol. 99, no. 2, March/April 2020, pp. 
103–15; and Nicholas Wright, ‘How Artificial Intelligence Will Reshape the Global Order’, Foreign Affairs, 10 July 2018, https://www.foreignaffairs.com/articles/world/2018-07-10/how-artificial-intelligence-will-reshape-global-order.27 Samantha Hoffman, ‘Programming China: The Communist Party’s Autonomic Approach to Managing State Security’, MERICS, 12 December 2017, https://merics.org/sites/default/files/2020-05/Programming%20China.pdf.28 Steven Feldstein, ‘The Global Struggle Over AI Surveillance’, National Endowment for Democracy, June 2022, https://www.ned.org/global-struggle-over-ai-surveillance-emerging-trends-democratic-responses/.29 See Dahlia Peterson, ‘How China Harnesses Data Fusion to Make Sense of Surveillance Data’, Brookings Institution, 23 September 2021, https://www.brookings.edu/articles/how-china-harnesses-data-fusion-to-make-sense-of-surveillance-data/.30 Cissy Zhou, ‘China Tells Big Tech Companies Not to Offer ChatGPT Services’, Nikkei Asia, 22 February 2023, https://asia.nikkei.com/Business/China-tech/China-tells-big-tech-companies-not-to-offer-ChatGPT-services. The list of countries in which ChatGPT is inaccessible, as of June 2023, predictably includes many authoritarian states, such as Afghanistan, China, Cuba, Iran, North Korea, Russia and Syria. Notably, Italy is also included on the list due to a ruling by its data-protection watchdog that OpenAI may be in breach of Europe’s privacy regulations. See Ryan Browne, ‘Italy Became the First Western Country to Ban ChatGPT. 
Here’s What Other Countries Are Doing’, CNBC, 4 April 2023, https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html; and Jon Martindale, ‘These Are the Countries Where ChatGPT Is Currently Banned’, Digital Trends, 12 April 2023, https://www.digitaltrends.com/computing/these-countries-chatgpt-banned/.31 See Channing Lee, ‘From ChatGPT to Chat CCP: The Future of Generative AI Models in China’, Georgetown Security Studies Review, 3 March 2023, https://georgetownsecuritystudiesreview.org/2023/03/03/from-chatgpt-to-chat-ccp-the-future-of-generative-ai-models-in-china/.32 See Sophia Yang, ‘China’s ChatGPTstyle Bot ChatYuan Suspended Over Questions About Xi’, Taiwan News, 11 February 2023, https://www.taiwannews.com.tw/en/news/4807319. A Chinese CEO reportedly quipped that ‘China’s LLMs are not even allowed to count to 10, as that would include the numbers eight and nine – a reference to the state’s sensitivity about the number 89 and any discussion of the 1989 Tiananmen Square protests’. Quoted in Helen Toner et al., ‘The Illusion of China’s AI Prowess’, Foreign Affairs, 2 June 2023, https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation.33 See Paul Triolo, ‘ChatGPT and China: How to Think About Large Language Models and the Generative AI Race’, China Project, 12 April 2023, https://thechinaproject.com/2023/04/12/chatgpt-and-china-how-to-think-about-large-language-models-and-the-generative-ai-race/.34 See Meaghan Tobin, ‘China Announces Rules to Keep AI Bound by “Core Socialist Values”’, Washington Post, 14 July 2023, https://www.washingtonpost.com/world/2023/07/14/china-ai-regulations-chatgpt-socialist/.35 See Helen Toner et al., ‘How Will China’s Generative AI Regulations Shape the Future? 
A DigiChina Forum’, DigiChina, Stanford University, 19 April 2023, https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/.36 Toner et al., ‘The Illusion of China’s AI Prowess’.37 Training GPT-3 required 1.3 gigawatthours of electricity (equivalent to powering 121 homes in the United States for a year ) and cost $4.6m.The training costs for GPT-4 are far higher, likely exceeding $100m. See ‘Large, Creative AI Models Will Transform Lives and Labour Markets’, The Economist, 22 April 2023, https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work.38 See Lisa Barrington, ‘Abu Dhabi Makes Its Falcon 40B AI Model Open Source’, Reuters, 25 May 2023, https://www.reuters.com/technology/abu-dhabi-makes-its-falcon-40b-ai-model-open-source-2023-05-25/.39 See Cade Metz and Mike Isaac, ‘In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels’, New York Times, 18 May 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html.40 See, for example, Rebecca Tan, ‘Facebook Helped Bring Free Speech to Vietnam. 
Now It’s Helping Stifle It’, Washington Post, 19 June 2023, https://www.washingtonpost.com/world/2023/06/19/facebook-meta-vietnam-government-censorship/.41 See Catherine Stupp, ‘Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case’, Wall Street Journal, 30 August 2019, https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.42 Leah Nylen, ‘FTC’s Khan Says Enforcers Need to Be “Vigilant Early” with AI’, Bloomberg, 1 June 2023, https://www.bloomberg.com/news/articles/2023-06-02/ftc-s-khan-says-enforcers-need-to-be-vigilant-early-with-ai.43 See Matt Burgess, ‘The Hacking of ChatGPT Is Just Getting Started’, Wired, 13 April 2023, https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/; and Kyle Wiggers, ‘Can AI Really Be Protected from Text-based Attacks?’, TechCrunch, 24 February 2023, https://techcrunch.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/?guccounter=1.44 See Europol, ‘ChatGPT: The Impact of Large Language Models on Law Enforcement’, Tech Watch Flash Report from the Europol Innovation Lab, 27 March 2023, https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf.45 Ibid.46 See Andrew J. Lohn and Krystal A. Jackson, ‘Will AI Make Cyber Swords or Shields?’, Georgetown University’s Center for Security and Emerging Technology, August 2022, https://cset.georgetown.edu/wp-content/uploads/CSET-Will-AI-Make-Cyber-Swords-or-Shields.pdf.47 See Steven Feldstein and Brian Kot, ‘Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses’, Carnegie Endowment for International Peace, working paper, March 2023, https://carnegieendowment.org/2023/03/14/why-does-global-spyware-industry-continue-to-thrive-trends-explanations-and-responses-pub-89229.48 Ronald J. 
Deibert, ‘The Autocrat in Your iPhone’, Foreign Affairs, 12 December 2022, https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert.49 Europol, ‘ChatGPT’.50 See Thomas Gaulkin, ‘What Happened When WMD Experts Tried to Make the GPT-4 AI Do Bad Things’, Bulletin of the Atomic Scientists, 30 March 2023, https://thebulletin.org/2023/03/what-happened-when-wmd-experts-tried-to-make-the-gpt-4-ai-do-bad-things/.51 Lauren Kahn, ‘Ground Rules for the Age of AI Warfare’, Foreign Affairs, 6 June 2023, https://www.foreignaffairs.com/world/ground-rules-age-ai-warfare.52 See David Ignatius, ‘How the Algorithm Tipped the Balance in Ukraine’, Washington Post, 19 December 2022, https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/; and Kahn, ‘Ground Rules for the Age of AI Warfare’.53 See John Antal, 7 Seconds to Die: A Military Analysis of the Second Nagorno-Karabakh War and the Future of Warfighting (Philadelphia, PA: Casemate, 2022); and Kelsey Atherton, ‘Loitering Munitions Preview the Autonomous Future of Warfare’, Brookings Institution, 4 August 2021, https://www.brookings.edu/techstream/loitering-munitions-preview-the-autonomous-future-of-warfare/.54 See Benjamin Jensen and Dan Tadross, ‘How Large-language Models Can Revolutionize Military Planning’, War on the Rocks, 12 April 2023, https://warontherocks.com/2023/04/how-large-language-models-can-revolutionize-military-planning/.55 Alexander Karp, ‘Our New Platform – A Letter from the Chief Executive Officer’, Palantir, 7 April 2023, https://www.palantir.com/newsroom/letters/our-new-platform/.56 See Alexander Ward et al., ‘Trump: “Used to Talk About” Ukraine Invasion with Putin’, Politico, 11 May 2023, https://www.politico.com/newsletters/national-security-daily/2023/05/11/trump-used-to-talk-about-ukraine-invasion-with-putin-00096394.57 Ross Andersen, ‘Never Give Artificial Intelligence the Nuclear Codes’, Atlantic, June 2023, 
https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/.58 See Arthur Holland Michel, ‘Known Unknowns: Data Issues and Military Autonomous Systems’, UNIDIR, 17 May 2021, https://unidir.org/known-unknowns.59 Frederik Federspiel et al., ‘Threats by Artificial Intelligence to Human Health and Human Existence’, BMJ Global Health, vol. 8, no. 5, May 2023, e010435, https://doi.org/10.1136/bmjgh-2022-010435.60 See Michael Hirsh, ‘How AI Will Revolutionize Warfare’, Foreign Policy, 11 April 2023, https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/.61 See Paul Scharre, ‘AI’s Inhuman Advantage’, War on the Rocks, 10 April 2023, https://warontherocks.com/2023/04/ais-inhuman-advantage/.62 See Benjamin Weiser and Nate Schweber, ‘The ChatGPT Lawyer Explains Himself’, New York Times, 8 June 2023, https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html. See also Stew Magnuson, ‘Just In: Pentagon’s Top AI Official Addresses ChatGPT’s Possible Benefits, Risks’, National Defense, 8 March 2023, https://www.nationaldefensemagazine.org/articles/2023/3/8/pentagons-top-ai-official-addresses-chatgpts-possible-benefits-risks.63 US Department of Defense, ‘DOD Announces Establishment of Generative AI Task Force’, 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/. See also Mohar Chatterjee, ‘Hackers in Vegas Take on AI’, Politico, 14 August 2023, https://www.politico.com/newsletters/digital-future-daily/2023/08/14/hackers-in-vegas-take-on-ai-00111145.64 Benjamin M. Jensen et al., ‘Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence’, International Studies Review, vol. 22, no. 3, September 2020, p. 537.65 See Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security, vol. 46, no. 
3, Winter 2021/2022, pp. 7–50.66 See Paul Krugman, ‘AI May Change Everything, But Probably Not Too Quickly’, New York Times, 31 March 2023, https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html.67 Paul A. David, ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’, American Economic Review, vol. 80, no. 2, May 1990, p. 356.68 See Edward L. Katzenbach, Jr, ‘The Horse Cavalry in the Twentieth Century: A Study in Policy Response’, Public Policy, vol. 7, 1958, pp. 120–49.69 Jensen et al., ‘Algorithms at War’.70 Stephanie Carvin, ‘How Not to War’, International Affairs, vol. 98, no. 5, September 2022, pp. 1,695–716.71 Krystal Hu, ‘ChatGPT Sets Record for Fastest-growing User Base’, Reuters, 2 February 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.72 See Michael C. Horowitz, The Diffusion of Military Power (Princeton, NJ: Princeton University Press, 2010).73 See Michael E. O’Hanlon, ‘The Plane Truth: Fewer F-22s Mean a Stronger National Defense’, Brookings Institution, 1 September 1999, https://www.brookings.edu/research/the-plane-truth-fewer-f-22s-mean-a-stronger-national-defense/.74 See, for example, Audrey Kurth Cronin, Power to the People: How Open Technological Innovation Is Arming Tomorrow’s Terrorists (Oxford: Oxford University Press, 2019); Ben FitzGerald and Jacqueline Parziale, ‘As Technology Goes Democratic, Nations Lose Military Control’, Bulletin of the Atomic Scientists, vol. 73, no. 2, 2017, pp. 102–7; and Emily O. Goldman and Leslie C. 
Eliason, The Diffusion of Military Technology and Ideas (Stanford, CA: Stanford University Press, 2003).75 Yonah Jeremy Bob, ‘IDF Will Run Entirely on Generative AI Within a Few Years – Israeli Cyber Chief’, Jerusalem Post, 28 June 2023, https://www.jpost.com/israel-news/defense-news/article-748028.76 See ‘Regulators Target Deepfakes’, Batch, 25 January 2023, https://www.deeplearning.ai/the-batch/chinas-new-law-limits-ai-generated-media/.77 See Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations’; and Adam Satariano, ‘Europeans Take a Major Step Toward Regulating AI’, New York Times, 14 June 2023, https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html.78 See Select Committee on Artificial Intelligence of the National Science and Technology Council, ‘National Artificial Intelligence Research and Development Strategic Plan 2023 Update’, May 2023, https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf.79 See Michael D. Shear, Cecilia Kang and David E. Sanger, ‘Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools’, New York Times, 21 July 2023, https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html.80 The G7 also have announced the ‘Hiroshima AI Process’, an intergovernmental task force designed to investigate the risks of generative AI. The initiative aims to increase collaboration on topics such as governance, safeguarding intellectual-property rights, transparency, disinforma-tion and responsible use of AI technologies. How much influence it will have remains to be seen. 
See White House, ‘G7 Hiroshima Leaders’ Communiqué’, 20 May 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.81 See ‘Governance of Superintelligence’, OpenAI, 22 May 2023, https://openai.com/blog/governance-of-superintelligence; and Billy Perrigo, ‘Exclusive: OpenAI Lobbied the EU to Water Down AI Regulation’, Time, 20 June 2023, https://time.com/6288245/openai-eu-lobbying-ai-act/.82 See Cristiano Lima, ‘Google Bucks Calls for a New AI Regulator’, Washington Post, 13 June 2023, https://www.washingtonpost.com/politics/2023/06/13/google-bucks-calls-new-ai-regulator/.83 See ‘Why Tech Giants Want to Strangle AI with Red Tape’, The Economist, 25 May 2023, https://www.economist.com/business/2023/05/25/why-tech-giants-want-to-strangle-ai-with-red-tape; and Matteo Wong, ‘AI Doomerism Is a Decoy’, Atlantic, 2 June 2023, https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/.84 See Casey Fiesler, ‘AI Has Social Consequences, But Who Pays the Price?’, Conversation, 18 April 2023, https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375.85 Abeba Birhane and Deborah Raji, ‘ChatGPT, Galactica, and the Progress Trap’, Wired, 9 December 2022, https://www.wired.com/story/large-language-models-critique/.86 Paul Scharre, ‘AI’s Gatekeepers Aren’t Prepared for What’s Coming’, Foreign Policy, 19 June 2023, https://foreignpolicy.com/2023/06/19/ai-regulation-development-us-china-competition-technology/.87 See US Department of State, ‘Political Declaration of Responsible Military Use of Artificial Intelligence and Autonomy’, 16 February 2023, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.88 See US Department of Defense, ‘DoD Announces Update to DoD Directive 3000.09’, 25 January 2023, 
https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/.89 See Kahn, ‘Ground Rules for the Age of AI Warfare’.Additional informationNotes on contributorsSteven FeldsteinSteven Feldstein is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace and the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (Oxford University Press, 2021). From 2014 to 2017, he served as US Deputy Assistant Secretary of State for Democracy, Human Rights, and Labor.","PeriodicalId":51535,"journal":{"name":"Survival","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2023-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Survival","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00396338.2023.2261260","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INTERNATIONAL RELATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

AbstractThe potential impact of generative AI across politics, governance and war is enormous, and is the subject of considerable speculation informed by few hard facts. Yet it is possible to identify some major challenges. They include threats to democracies by privately controlled models that gain tremendous power to shape discourse and affect democratic deliberation; enhanced surveillance and propaganda dissemination by authoritarian regimes; new capacities for criminal and terrorist actors to carry out cyber attacks and related disruptions; and transformed war planning and military operations reflecting the accelerated dehumanisation of lethal force. While new innovations historically require time to take root, generative AI is likely to be adopted swiftly. Stakeholders must formulate pragmatic approaches to manage oncoming risks.Key words: Artificial intelligence (AI)chatbotsChatGPTcyber attackslarge language model (LLM)military planningpropagandasurveillance AcknowledgementsI would like to thank Tom Carothers, Matt O’Shaughnessy and Gavin Wilde for their valuable comments and feedback, and Brian (Chun Hey) Kot for his research assistance.Notes1 See Rishi Bommasani et al., ‘On the Opportunities and Risks of Foundation Models’, Center for Research on Foundational Models, Stanford University, 12 July 2022, https://crfm.stanford.edu/assets/report.pdf; and Helen Toner, ‘What Are Generative AI, Large Language Models, and Foundation Models?’, Center for Security and Emerging Technology, Georgetown University, May 2023, https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/.2 See Kevin Roose, ‘How Does ChatGPT Really Work?’, New York Times, 28 March 2023, https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html.3 See Jordan Hoffmann et al., ‘An Empirical Analysis of Computeoptimal Large Language Model Training’, Google DeepMind, 12 April 2022, 
https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training; and Pranshu Verma and Kevin Schaul, ‘See Why AI Like ChatGPT Has Gotten So Good, So Fast’, Washington Post, 24 May 2023, https://www.washingtonpost.com/business/interactive/2023/artificial-intelligence-tech-rapid-advances/.4 See Tom B. Brown et al., ‘Language Models Are Few-shot Learners’, 34th Conference on Neural Information Processing Systems (Neur IPS 2020), Vancouver, Canada, 22 July 2020, https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.5 See Lukas Esterle, ‘Deep Learning in Multiagent Systems’, in Alexandros Iosifidis and Anastasios Tefas (eds), Deep Learning for Robot Perception and Cognition (Cambridge, MA: Academic Press, 2022), pp. 435–60; and David Nield, ‘Supercharge Your ChatGPT Prompts with Auto-GPT’, Wired, 21 May 2023, https://www.wired.co.uk/article/chatgpt-prompts-auto-gpt. It is worth noting that the autonomy of an AI system sits on a spectrum, rather than being binary. While the goal of developers is to increase the ability of AI systems to complete increasingly complex tasks, this will be a slow evolution rather than a sudden jump in capabilities.6 See Chloe Xiang, ‘Developers Are Connecting Multiple AI Agents to Make More “Autonomous” AI’, Vice, 4 April 2023, https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai.7 See Mark Sullivan, ‘Auto-GPT and BabyAGI: How “Autonomous Agents” Are Bringing Generative AI to the Masses’, Fast Company, 13 April 2023, https://www.fastcompany.com/90880294/auto-gpt-and-babyagi-how-autonomous-agents-are-bringing-generative-ai-to-the-masses.8 See, for example, Josh Zumbrun, ‘Why ChatGPT Is Getting Dumber at Basic Math’, Wall Street Journal, 4 August 2023, https://www.wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0.9 See, for example, Tristan Bove, ‘Bill Gates Says that the A.I. 
Revolution Means Everyone Will Have Their Own “White Collar” Personal Assistant’, Fortune, 6 May 2023, https://fortune.com/2023/03/22/bill-gates-ai-work-productivity-personal-assistants-chatgpt/.10 Gary Marcus, ‘Senate Testimony’, US Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, 118th Congress, 16 May 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf.11 See Davey Alba, ‘OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails’, Bloomberg, 8 December 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results.12 See Emily M. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 610–23, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.13 Hannes Bajohr, ‘Whoever Controls Language Models Controls Politics’, 8 April 2023, https://hannesbajohr.de/en/2023/04/08/whoever-controls-language-models-controls-politics/.14 Ibid.15 See Steven Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations: How Will This Influence Global Norms?’, Democratization, 2023, pp. 1–18.16 See Kayleen Devlin and Joshua Cheetham, ‘Fake Trump Arrest Photos: How to Spot an AI-generated Image’, BBC News, 24 March 2023, https://www.bbc.com/news/world-us-canada-65069316.17 ‘Beat Biden’, YouTube, 25 April 2023, https://www.youtube.com/watch?v=kLMMxgtxQ1Y. See also Isaac Stanley-Becker and John Wagner, ‘Republicans Counter Biden Announcement with Dystopian, AI-aided Video’, Washington Post, 25 April 2023, https://www.washingtonpost.com/politics/2023/04/25/rnc-biden-ad-ai/.18 See Andrew R. Sorkin et al., ‘An A.I.generated Spoof Rattles the Markets’, New York Times, 23 May 2023, https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html.19 See Josh A. 
Goldstein et al., ‘Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations’, January 2023, https://cdn.openai.com/papers/forecasting-misuse.pdf.20 See Thor Benson, ‘Brace Yourself for the 2024 Deepfake Election’, Wired, 27 April 2023, https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/.21 Goldstein et al., ‘Generative Language Models and Automated Influence Operations’.22 Josh A. Goldstein and Girish Sastry, ‘The Coming Age of AI-powered Propaganda’, Foreign Affairs, 27 April 2023, https://www.foreignaffairs.com/united-states/coming-age-ai-powered-propaganda.23 See Ben M. Tappin et al., ‘Quantifying the Potential Persuasive Returns to Political Microtargeting’, Proceedings of the National Academy of Sciences, vol. 120, no. 25, June 2023, https://www.pnas.org/doi/10.1073/pnas.2216261120. The literature on disinformation is not settled about how much false online information impacts and undermines democracy. See, for example, Jon Bateman et al., ‘Measuring the Effects of Influence Operations: Key Findings and Gaps from Empirical Research’, Carnegie Endowment for International Peace – PCIO Baseline, 28 June 2021, https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824; and Joshua A. Tucker et al., ‘Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature’, Hewlett Foundation, 19 March 2018, https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/.24 See Nathan E. 
Sanders and Bruce Schneier, ‘How ChatGPT Hijacks Democracy’, New York Times, 15 January 2023, https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.25 See Sarah Kreps and Douglas Kriner, ‘How Generative AI Impacts Democratic Engagement’, Brookings Institution, 21 March 2023, https://www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement/.26 See Steven Feldstein, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, September 2019, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847; Steven Feldstein, ‘How Artificial Intelligence Is Reshaping Repression’, Journal of Democracy, vol. 30, no. 1, January 2019, pp. 40–52; Steven Feldstein, The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (New York: Oxford University Press, 2021); Andrea Kendall-Taylor et al., ‘The Digital Dictators’, Foreign Affairs, vol. 99, no. 2, March/April 2020, pp. 103–15; and Nicholas Wright, ‘How Artificial Intelligence Will Reshape the Global Order’, Foreign Affairs, 10 July 2018, https://www.foreignaffairs.com/articles/world/2018-07-10/how-artificial-intelligence-will-reshape-global-order.27 Samantha Hoffman, ‘Programming China: The Communist Party’s Autonomic Approach to Managing State Security’, MERICS, 12 December 2017, https://merics.org/sites/default/files/2020-05/Programming%20China.pdf.28 Steven Feldstein, ‘The Global Struggle Over AI Surveillance’, National Endowment for Democracy, June 2022, https://www.ned.org/global-struggle-over-ai-surveillance-emerging-trends-democratic-responses/.29 See Dahlia Peterson, ‘How China Harnesses Data Fusion to Make Sense of Surveillance Data’, Brookings Institution, 23 September 2021, https://www.brookings.edu/articles/how-china-harnesses-data-fusion-to-make-sense-of-surveillance-data/.30 Cissy Zhou, ‘China Tells Big Tech Companies Not to Offer ChatGPT Services’, Nikkei Asia, 22 February 2023, 
https://asia.nikkei.com/Business/China-tech/China-tells-big-tech-companies-not-to-offer-ChatGPT-services. The list of countries in which ChatGPT is inaccessible, as of June 2023, predictably includes many authoritarian states, such as Afghanistan, China, Cuba, Iran, North Korea, Russia and Syria. Notably, Italy is also included on the list due to a ruling by its data-protection watchdog that OpenAI may be in breach of Europe’s privacy regulations. See Ryan Browne, ‘Italy Became the First Western Country to Ban ChatGPT. Here’s What Other Countries Are Doing’, CNBC, 4 April 2023, https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html; and Jon Martindale, ‘These Are the Countries Where ChatGPT Is Currently Banned’, Digital Trends, 12 April 2023, https://www.digitaltrends.com/computing/these-countries-chatgpt-banned/.31 See Channing Lee, ‘From ChatGPT to Chat CCP: The Future of Generative AI Models in China’, Georgetown Security Studies Review, 3 March 2023, https://georgetownsecuritystudiesreview.org/2023/03/03/from-chatgpt-to-chat-ccp-the-future-of-generative-ai-models-in-china/.32 See Sophia Yang, ‘China’s ChatGPT-style Bot ChatYuan Suspended Over Questions About Xi’, Taiwan News, 11 February 2023, https://www.taiwannews.com.tw/en/news/4807319. A Chinese CEO reportedly quipped that ‘China’s LLMs are not even allowed to count to 10, as that would include the numbers eight and nine’ – a reference to the state’s sensitivity about the number 89 and any discussion of the 1989 Tiananmen Square protests. 
Quoted in Helen Toner et al., ‘The Illusion of China’s AI Prowess’, Foreign Affairs, 2 June 2023, https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation.33 See Paul Triolo, ‘ChatGPT and China: How to Think About Large Language Models and the Generative AI Race’, China Project, 12 April 2023, https://thechinaproject.com/2023/04/12/chatgpt-and-china-how-to-think-about-large-language-models-and-the-generative-ai-race/.34 See Meaghan Tobin, ‘China Announces Rules to Keep AI Bound by “Core Socialist Values”’, Washington Post, 14 July 2023, https://www.washingtonpost.com/world/2023/07/14/china-ai-regulations-chatgpt-socialist/.35 See Helen Toner et al., ‘How Will China’s Generative AI Regulations Shape the Future? A DigiChina Forum’, DigiChina, Stanford University, 19 April 2023, https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/.36 Toner et al., ‘The Illusion of China’s AI Prowess’.37 Training GPT-3 required 1.3 gigawatt-hours of electricity (equivalent to powering 121 homes in the United States for a year) and cost $4.6m. The training costs for GPT-4 are far higher, likely exceeding $100m. See ‘Large, Creative AI Models Will Transform Lives and Labour Markets’, The Economist, 22 April 2023, https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work.38 See Lisa Barrington, ‘Abu Dhabi Makes Its Falcon 40B AI Model Open Source’, Reuters, 25 May 2023, https://www.reuters.com/technology/abu-dhabi-makes-its-falcon-40b-ai-model-open-source-2023-05-25/.39 See Cade Metz and Mike Isaac, ‘In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels’, New York Times, 18 May 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html.40 See, for example, Rebecca Tan, ‘Facebook Helped Bring Free Speech to Vietnam. 
Now It’s Helping Stifle It’, Washington Post, 19 June 2023, https://www.washingtonpost.com/world/2023/06/19/facebook-meta-vietnam-government-censorship/.41 See Catherine Stupp, ‘Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case’, Wall Street Journal, 30 August 2019, https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.42 Leah Nylen, ‘FTC’s Khan Says Enforcers Need to Be “Vigilant Early” with AI’, Bloomberg, 1 June 2023, https://www.bloomberg.com/news/articles/2023-06-02/ftc-s-khan-says-enforcers-need-to-be-vigilant-early-with-ai.43 See Matt Burgess, ‘The Hacking of ChatGPT Is Just Getting Started’, Wired, 13 April 2023, https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/; and Kyle Wiggers, ‘Can AI Really Be Protected from Text-based Attacks?’, TechCrunch, 24 February 2023, https://techcrunch.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/?guccounter=1.44 See Europol, ‘ChatGPT: The Impact of Large Language Models on Law Enforcement’, Tech Watch Flash Report from the Europol Innovation Lab, 27 March 2023, https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf.45 Ibid.46 See Andrew J. Lohn and Krystal A. Jackson, ‘Will AI Make Cyber Swords or Shields?’, Georgetown University’s Center for Security and Emerging Technology, August 2022, https://cset.georgetown.edu/wp-content/uploads/CSET-Will-AI-Make-Cyber-Swords-or-Shields.pdf.47 See Steven Feldstein and Brian Kot, ‘Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses’, Carnegie Endowment for International Peace, working paper, March 2023, https://carnegieendowment.org/2023/03/14/why-does-global-spyware-industry-continue-to-thrive-trends-explanations-and-responses-pub-89229.48 Ronald J. 
Deibert, ‘The Autocrat in Your iPhone’, Foreign Affairs, 12 December 2022, https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert.49 Europol, ‘ChatGPT’.50 See Thomas Gaulkin, ‘What Happened When WMD Experts Tried to Make the GPT-4 AI Do Bad Things’, Bulletin of the Atomic Scientists, 30 March 2023, https://thebulletin.org/2023/03/what-happened-when-wmd-experts-tried-to-make-the-gpt-4-ai-do-bad-things/.51 Lauren Kahn, ‘Ground Rules for the Age of AI Warfare’, Foreign Affairs, 6 June 2023, https://www.foreignaffairs.com/world/ground-rules-age-ai-warfare.52 See David Ignatius, ‘How the Algorithm Tipped the Balance in Ukraine’, Washington Post, 19 December 2022, https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/; and Kahn, ‘Ground Rules for the Age of AI Warfare’.53 See John Antal, 7 Seconds to Die: A Military Analysis of the Second Nagorno-Karabakh War and the Future of Warfighting (Philadelphia, PA: Casemate, 2022); and Kelsey Atherton, ‘Loitering Munitions Preview the Autonomous Future of Warfare’, Brookings Institution, 4 August 2021, https://www.brookings.edu/techstream/loitering-munitions-preview-the-autonomous-future-of-warfare/.54 See Benjamin Jensen and Dan Tadross, ‘How Large-language Models Can Revolutionize Military Planning’, War on the Rocks, 12 April 2023, https://warontherocks.com/2023/04/how-large-language-models-can-revolutionize-military-planning/.55 Alexander Karp, ‘Our New Platform – A Letter from the Chief Executive Officer’, Palantir, 7 April 2023, https://www.palantir.com/newsroom/letters/our-new-platform/.56 See Alexander Ward et al., ‘Trump: “Used to Talk About” Ukraine Invasion with Putin’, Politico, 11 May 2023, https://www.politico.com/newsletters/national-security-daily/2023/05/11/trump-used-to-talk-about-ukraine-invasion-with-putin-00096394.57 Ross Andersen, ‘Never Give Artificial Intelligence the Nuclear Codes’, Atlantic, June 2023, 
https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/.58 See Arthur Holland Michel, ‘Known Unknowns: Data Issues and Military Autonomous Systems’, UNIDIR, 17 May 2021, https://unidir.org/known-unknowns.59 Frederik Federspiel et al., ‘Threats by Artificial Intelligence to Human Health and Human Existence’, BMJ Global Health, vol. 8, no. 5, May 2023, e010435, https://doi.org/10.1136/bmjgh-2022-010435.60 See Michael Hirsh, ‘How AI Will Revolutionize Warfare’, Foreign Policy, 11 April 2023, https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/.61 See Paul Scharre, ‘AI’s Inhuman Advantage’, War on the Rocks, 10 April 2023, https://warontherocks.com/2023/04/ais-inhuman-advantage/.62 See Benjamin Weiser and Nate Schweber, ‘The ChatGPT Lawyer Explains Himself’, New York Times, 8 June 2023, https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html. See also Stew Magnuson, ‘Just In: Pentagon’s Top AI Official Addresses ChatGPT’s Possible Benefits, Risks’, National Defense, 8 March 2023, https://www.nationaldefensemagazine.org/articles/2023/3/8/pentagons-top-ai-official-addresses-chatgpts-possible-benefits-risks.63 US Department of Defense, ‘DOD Announces Establishment of Generative AI Task Force’, 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/. See also Mohar Chatterjee, ‘Hackers in Vegas Take on AI’, Politico, 14 August 2023, https://www.politico.com/newsletters/digital-future-daily/2023/08/14/hackers-in-vegas-take-on-ai-00111145.64 Benjamin M. Jensen et al., ‘Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence’, International Studies Review, vol. 22, no. 3, September 2020, p. 537.65 See Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security, vol. 46, no. 
3, Winter 2021/2022, pp. 7–50.66 See Paul Krugman, ‘AI May Change Everything, But Probably Not Too Quickly’, New York Times, 31 March 2023, https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html.67 Paul A. David, ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’, American Economic Review, vol. 80, no. 2, May 1990, p. 356.68 See Edward L. Katzenbach, Jr, ‘The Horse Cavalry in the Twentieth Century: A Study in Policy Response’, Public Policy, vol. 7, 1958, pp. 120–49.69 Jensen et al., ‘Algorithms at War’.70 Stephanie Carvin, ‘How Not to War’, International Affairs, vol. 98, no. 5, September 2022, pp. 1,695–716.71 Krystal Hu, ‘ChatGPT Sets Record for Fastest-growing User Base’, Reuters, 2 February 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.72 See Michael C. Horowitz, The Diffusion of Military Power (Princeton, NJ: Princeton University Press, 2010).73 See Michael E. O’Hanlon, ‘The Plane Truth: Fewer F-22s Mean a Stronger National Defense’, Brookings Institution, 1 September 1999, https://www.brookings.edu/research/the-plane-truth-fewer-f-22s-mean-a-stronger-national-defense/.74 See, for example, Audrey Kurth Cronin, Power to the People: How Open Technological Innovation Is Arming Tomorrow’s Terrorists (Oxford: Oxford University Press, 2019); Ben FitzGerald and Jacqueline Parziale, ‘As Technology Goes Democratic, Nations Lose Military Control’, Bulletin of the Atomic Scientists, vol. 73, no. 2, 2017, pp. 102–7; and Emily O. Goldman and Leslie C. 
Eliason, The Diffusion of Military Technology and Ideas (Stanford, CA: Stanford University Press, 2003).75 Yonah Jeremy Bob, ‘IDF Will Run Entirely on Generative AI Within a Few Years – Israeli Cyber Chief’, Jerusalem Post, 28 June 2023, https://www.jpost.com/israel-news/defense-news/article-748028.76 See ‘Regulators Target Deepfakes’, Batch, 25 January 2023, https://www.deeplearning.ai/the-batch/chinas-new-law-limits-ai-generated-media/.77 See Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations’; and Adam Satariano, ‘Europeans Take a Major Step Toward Regulating AI’, New York Times, 14 June 2023, https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html.78 See Select Committee on Artificial Intelligence of the National Science and Technology Council, ‘National Artificial Intelligence Research and Development Strategic Plan 2023 Update’, May 2023, https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf.79 See Michael D. Shear, Cecilia Kang and David E. Sanger, ‘Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools’, New York Times, 21 July 2023, https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html.80 The G7 have also announced the ‘Hiroshima AI Process’, an intergovernmental task force designed to investigate the risks of generative AI. The initiative aims to increase collaboration on topics such as governance, safeguarding intellectual-property rights, transparency, disinformation and responsible use of AI technologies. How much influence it will have remains to be seen. 
See White House, ‘G7 Hiroshima Leaders’ Communiqué’, 20 May 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.81 See ‘Governance of Superintelligence’, OpenAI, 22 May 2023, https://openai.com/blog/governance-of-superintelligence; and Billy Perrigo, ‘Exclusive: OpenAI Lobbied the EU to Water Down AI Regulation’, Time, 20 June 2023, https://time.com/6288245/openai-eu-lobbying-ai-act/.82 See Cristiano Lima, ‘Google Bucks Calls for a New AI Regulator’, Washington Post, 13 June 2023, https://www.washingtonpost.com/politics/2023/06/13/google-bucks-calls-new-ai-regulator/.83 See ‘Why Tech Giants Want to Strangle AI with Red Tape’, The Economist, 25 May 2023, https://www.economist.com/business/2023/05/25/why-tech-giants-want-to-strangle-ai-with-red-tape; and Matteo Wong, ‘AI Doomerism Is a Decoy’, Atlantic, 2 June 2023, https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/.84 See Casey Fiesler, ‘AI Has Social Consequences, But Who Pays the Price?’, Conversation, 18 April 2023, https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375.85 Abeba Birhane and Deborah Raji, ‘ChatGPT, Galactica, and the Progress Trap’, Wired, 9 December 2022, https://www.wired.com/story/large-language-models-critique/.86 Paul Scharre, ‘AI’s Gatekeepers Aren’t Prepared for What’s Coming’, Foreign Policy, 19 June 2023, https://foreignpolicy.com/2023/06/19/ai-regulation-development-us-china-competition-technology/.87 See US Department of State, ‘Political Declaration of Responsible Military Use of Artificial Intelligence and Autonomy’, 16 February 2023, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.88 See US Department of Defense, ‘DoD Announces Update to DoD Directive 3000.09’, 25 January 2023, 
https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/.89 See Kahn, ‘Ground Rules for the Age of AI Warfare’.

Additional information

Notes on contributors

Steven Feldstein

Steven Feldstein is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace and the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (Oxford University Press, 2021). From 2014 to 2017, he served as US Deputy Assistant Secretary of State for Democracy, Human Rights, and Labor.
Source journal: Survival