{"title":"The Consequences of Generative AI for Democracy, Governance and War","authors":"Steven Feldstein","doi":"10.1080/00396338.2023.2261260","DOIUrl":null,"url":null,"abstract":"The potential impact of generative AI across politics, governance and war is enormous, and is the subject of considerable speculation informed by few hard facts. Yet it is possible to identify some major challenges. They include threats to democracies by privately controlled models that gain tremendous power to shape discourse and affect democratic deliberation; enhanced surveillance and propaganda dissemination by authoritarian regimes; new capacities for criminal and terrorist actors to carry out cyber attacks and related disruptions; and transformed war planning and military operations reflecting the accelerated dehumanisation of lethal force. While new innovations historically require time to take root, generative AI is likely to be adopted swiftly. Stakeholders must formulate pragmatic approaches to manage oncoming risks. Keywords: artificial intelligence (AI), chatbots, ChatGPT, cyber attacks, large language model (LLM), military planning, propaganda, surveillance. Acknowledgements: I would like to thank Tom Carothers, Matt O’Shaughnessy and Gavin Wilde for their valuable comments and feedback, and Brian (Chun Hey) Kot for his research assistance. Notes 1 See Rishi Bommasani et al., ‘On the Opportunities and Risks of Foundation Models’, Center for Research on Foundation Models, Stanford University, 12 July 2022, https://crfm.stanford.edu/assets/report.pdf; and Helen Toner, ‘What Are Generative AI, Large Language Models, and Foundation Models?’, Center for Security and Emerging Technology, Georgetown University, May 2023, https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/.2 See Kevin Roose, ‘How Does ChatGPT Really Work?’, New York Times, 28 March 2023, 
https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html.3 See Jordan Hoffmann et al., ‘An Empirical Analysis of Compute-optimal Large Language Model Training’, Google DeepMind, 12 April 2022, https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training; and Pranshu Verma and Kevin Schaul, ‘See Why AI Like ChatGPT Has Gotten So Good, So Fast’, Washington Post, 24 May 2023, https://www.washingtonpost.com/business/interactive/2023/artificial-intelligence-tech-rapid-advances/.4 See Tom B. Brown et al., ‘Language Models Are Few-shot Learners’, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada, 22 July 2020, https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.5 See Lukas Esterle, ‘Deep Learning in Multiagent Systems’, in Alexandros Iosifidis and Anastasios Tefas (eds), Deep Learning for Robot Perception and Cognition (Cambridge, MA: Academic Press, 2022), pp. 435–60; and David Nield, ‘Supercharge Your ChatGPT Prompts with Auto-GPT’, Wired, 21 May 2023, https://www.wired.co.uk/article/chatgpt-prompts-auto-gpt. It is worth noting that the autonomy of an AI system sits on a spectrum, rather than being binary. 
While the goal of developers is to increase the ability of AI systems to complete increasingly complex tasks, this will be a slow evolution rather than a sudden jump in capabilities.6 See Chloe Xiang, ‘Developers Are Connecting Multiple AI Agents to Make More “Autonomous” AI’, Vice, 4 April 2023, https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai.7 See Mark Sullivan, ‘Auto-GPT and BabyAGI: How “Autonomous Agents” Are Bringing Generative AI to the Masses’, Fast Company, 13 April 2023, https://www.fastcompany.com/90880294/auto-gpt-and-babyagi-how-autonomous-agents-are-bringing-generative-ai-to-the-masses.8 See, for example, Josh Zumbrun, ‘Why ChatGPT Is Getting Dumber at Basic Math’, Wall Street Journal, 4 August 2023, https://www.wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0.9 See, for example, Tristan Bove, ‘Bill Gates Says that the A.I. Revolution Means Everyone Will Have Their Own “White Collar” Personal Assistant’, Fortune, 6 May 2023, https://fortune.com/2023/03/22/bill-gates-ai-work-productivity-personal-assistants-chatgpt/.10 Gary Marcus, ‘Senate Testimony’, US Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, 118th Congress, 16 May 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf.11 See Davey Alba, ‘OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails’, Bloomberg, 8 December 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results.12 See Emily M. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 
610–23, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.13 Hannes Bajohr, ‘Whoever Controls Language Models Controls Politics’, 8 April 2023, https://hannesbajohr.de/en/2023/04/08/whoever-controls-language-models-controls-politics/.14 Ibid.15 See Steven Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations: How Will This Influence Global Norms?’, Democratization, 2023, pp. 1–18.16 See Kayleen Devlin and Joshua Cheetham, ‘Fake Trump Arrest Photos: How to Spot an AI-generated Image’, BBC News, 24 March 2023, https://www.bbc.com/news/world-us-canada-65069316.17 ‘Beat Biden’, YouTube, 25 April 2023, https://www.youtube.com/watch?v=kLMMxgtxQ1Y. See also Isaac Stanley-Becker and John Wagner, ‘Republicans Counter Biden Announcement with Dystopian, AI-aided Video’, Washington Post, 25 April 2023, https://www.washingtonpost.com/politics/2023/04/25/rnc-biden-ad-ai/.18 See Andrew R. Sorkin et al., ‘An A.I.-generated Spoof Rattles the Markets’, New York Times, 23 May 2023, https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html.19 See Josh A. Goldstein et al., ‘Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations’, January 2023, https://cdn.openai.com/papers/forecasting-misuse.pdf.20 See Thor Benson, ‘Brace Yourself for the 2024 Deepfake Election’, Wired, 27 April 2023, https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/.21 Goldstein et al., ‘Generative Language Models and Automated Influence Operations’.22 Josh A. Goldstein and Girish Sastry, ‘The Coming Age of AI-powered Propaganda’, Foreign Affairs, 27 April 2023, https://www.foreignaffairs.com/united-states/coming-age-ai-powered-propaganda.23 See Ben M. Tappin et al., ‘Quantifying the Potential Persuasive Returns to Political Microtargeting’, Proceedings of the National Academy of Sciences, vol. 120, no. 25, June 2023, https://www.pnas.org/doi/10.1073/pnas.2216261120. 
The literature on disinformation is not settled about how much false online information impacts and undermines democracy. See, for example, Jon Bateman et al., ‘Measuring the Effects of Influence Operations: Key Findings and Gaps from Empirical Research’, Carnegie Endowment for International Peace – PCIO Baseline, 28 June 2021, https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824; and Joshua A. Tucker et al., ‘Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature’, Hewlett Foundation, 19 March 2018, https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/.24 See Nathan E. Sanders and Bruce Schneier, ‘How ChatGPT Hijacks Democracy’, New York Times, 15 January 2023, https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.25 See Sarah Kreps and Douglas Kriner, ‘How Generative AI Impacts Democratic Engagement’, Brookings Institution, 21 March 2023, https://www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement/.26 See Steven Feldstein, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, September 2019, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847; Steven Feldstein, ‘How Artificial Intelligence Is Reshaping Repression’, Journal of Democracy, vol. 30, no. 1, January 2019, pp. 40–52; Steven Feldstein, The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (New York: Oxford University Press, 2021); Andrea Kendall-Taylor et al., ‘The Digital Dictators’, Foreign Affairs, vol. 99, no. 2, March/April 2020, pp. 
103–15; and Nicholas Wright, ‘How Artificial Intelligence Will Reshape the Global Order’, Foreign Affairs, 10 July 2018, https://www.foreignaffairs.com/articles/world/2018-07-10/how-artificial-intelligence-will-reshape-global-order.27 Samantha Hoffman, ‘Programming China: The Communist Party’s Autonomic Approach to Managing State Security’, MERICS, 12 December 2017, https://merics.org/sites/default/files/2020-05/Programming%20China.pdf.28 Steven Feldstein, ‘The Global Struggle Over AI Surveillance’, National Endowment for Democracy, June 2022, https://www.ned.org/global-struggle-over-ai-surveillance-emerging-trends-democratic-responses/.29 See Dahlia Peterson, ‘How China Harnesses Data Fusion to Make Sense of Surveillance Data’, Brookings Institution, 23 September 2021, https://www.brookings.edu/articles/how-china-harnesses-data-fusion-to-make-sense-of-surveillance-data/.30 Cissy Zhou, ‘China Tells Big Tech Companies Not to Offer ChatGPT Services’, Nikkei Asia, 22 February 2023, https://asia.nikkei.com/Business/China-tech/China-tells-big-tech-companies-not-to-offer-ChatGPT-services. The list of countries in which ChatGPT is inaccessible, as of June 2023, predictably includes many authoritarian states, such as Afghanistan, China, Cuba, Iran, North Korea, Russia and Syria. Notably, Italy is also included on the list due to a ruling by its data-protection watchdog that OpenAI may be in breach of Europe’s privacy regulations. See Ryan Browne, ‘Italy Became the First Western Country to Ban ChatGPT. 
Here’s What Other Countries Are Doing’, CNBC, 4 April 2023, https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html; and Jon Martindale, ‘These Are the Countries Where ChatGPT Is Currently Banned’, Digital Trends, 12 April 2023, https://www.digitaltrends.com/computing/these-countries-chatgpt-banned/.31 See Channing Lee, ‘From ChatGPT to Chat CCP: The Future of Generative AI Models in China’, Georgetown Security Studies Review, 3 March 2023, https://georgetownsecuritystudiesreview.org/2023/03/03/from-chatgpt-to-chat-ccp-the-future-of-generative-ai-models-in-china/.32 See Sophia Yang, ‘China’s ChatGPT-style Bot ChatYuan Suspended Over Questions About Xi’, Taiwan News, 11 February 2023, https://www.taiwannews.com.tw/en/news/4807319. A Chinese CEO reportedly quipped that ‘China’s LLMs are not even allowed to count to 10, as that would include the numbers eight and nine – a reference to the state’s sensitivity about the number 89 and any discussion of the 1989 Tiananmen Square protests’. Quoted in Helen Toner et al., ‘The Illusion of China’s AI Prowess’, Foreign Affairs, 2 June 2023, https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation.33 See Paul Triolo, ‘ChatGPT and China: How to Think About Large Language Models and the Generative AI Race’, China Project, 12 April 2023, https://thechinaproject.com/2023/04/12/chatgpt-and-china-how-to-think-about-large-language-models-and-the-generative-ai-race/.34 See Meaghan Tobin, ‘China Announces Rules to Keep AI Bound by “Core Socialist Values”’, Washington Post, 14 July 2023, https://www.washingtonpost.com/world/2023/07/14/china-ai-regulations-chatgpt-socialist/.35 See Helen Toner et al., ‘How Will China’s Generative AI Regulations Shape the Future? 
A DigiChina Forum’, DigiChina, Stanford University, 19 April 2023, https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/.36 Toner et al., ‘The Illusion of China’s AI Prowess’.37 Training GPT-3 required 1.3 gigawatt-hours of electricity (equivalent to powering 121 homes in the United States for a year) and cost $4.6m. The training costs for GPT-4 are far higher, likely exceeding $100m. See ‘Large, Creative AI Models Will Transform Lives and Labour Markets’, The Economist, 22 April 2023, https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work.38 See Lisa Barrington, ‘Abu Dhabi Makes Its Falcon 40B AI Model Open Source’, Reuters, 25 May 2023, https://www.reuters.com/technology/abu-dhabi-makes-its-falcon-40b-ai-model-open-source-2023-05-25/.39 See Cade Metz and Mike Isaac, ‘In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels’, New York Times, 18 May 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html.40 See, for example, Rebecca Tan, ‘Facebook Helped Bring Free Speech to Vietnam. 
Now It’s Helping Stifle It’, Washington Post, 19 June 2023, https://www.washingtonpost.com/world/2023/06/19/facebook-meta-vietnam-government-censorship/.41 See Catherine Stupp, ‘Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case’, Wall Street Journal, 30 August 2019, https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.42 Leah Nylen, ‘FTC’s Khan Says Enforcers Need to Be “Vigilant Early” with AI’, Bloomberg, 1 June 2023, https://www.bloomberg.com/news/articles/2023-06-02/ftc-s-khan-says-enforcers-need-to-be-vigilant-early-with-ai.43 See Matt Burgess, ‘The Hacking of ChatGPT Is Just Getting Started’, Wired, 13 April 2023, https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/; and Kyle Wiggers, ‘Can AI Really Be Protected from Text-based Attacks?’, TechCrunch, 24 February 2023, https://techcrunch.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/?guccounter=1.44 See Europol, ‘ChatGPT: The Impact of Large Language Models on Law Enforcement’, Tech Watch Flash Report from the Europol Innovation Lab, 27 March 2023, https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf.45 Ibid.46 See Andrew J. Lohn and Krystal A. Jackson, ‘Will AI Make Cyber Swords or Shields?’, Georgetown University’s Center for Security and Emerging Technology, August 2022, https://cset.georgetown.edu/wp-content/uploads/CSET-Will-AI-Make-Cyber-Swords-or-Shields.pdf.47 See Steven Feldstein and Brian Kot, ‘Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses’, Carnegie Endowment for International Peace, working paper, March 2023, https://carnegieendowment.org/2023/03/14/why-does-global-spyware-industry-continue-to-thrive-trends-explanations-and-responses-pub-89229.48 Ronald J. 
Deibert, ‘The Autocrat in Your iPhone’, Foreign Affairs, 12 December 2022, https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert.49 Europol, ‘ChatGPT’.50 See Thomas Gaulkin, ‘What Happened When WMD Experts Tried to Make the GPT-4 AI Do Bad Things’, Bulletin of the Atomic Scientists, 30 March 2023, https://thebulletin.org/2023/03/what-happened-when-wmd-experts-tried-to-make-the-gpt-4-ai-do-bad-things/.51 Lauren Kahn, ‘Ground Rules for the Age of AI Warfare’, Foreign Affairs, 6 June 2023, https://www.foreignaffairs.com/world/ground-rules-age-ai-warfare.52 See David Ignatius, ‘How the Algorithm Tipped the Balance in Ukraine’, Washington Post, 19 December 2022, https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/; and Kahn, ‘Ground Rules for the Age of AI Warfare’.53 See John Antal, 7 Seconds to Die: A Military Analysis of the Second Nagorno-Karabakh War and the Future of Warfighting (Philadelphia, PA: Casemate, 2022); and Kelsey Atherton, ‘Loitering Munitions Preview the Autonomous Future of Warfare’, Brookings Institution, 4 August 2021, https://www.brookings.edu/techstream/loitering-munitions-preview-the-autonomous-future-of-warfare/.54 See Benjamin Jensen and Dan Tadross, ‘How Large-language Models Can Revolutionize Military Planning’, War on the Rocks, 12 April 2023, https://warontherocks.com/2023/04/how-large-language-models-can-revolutionize-military-planning/.55 Alexander Karp, ‘Our New Platform – A Letter from the Chief Executive Officer’, Palantir, 7 April 2023, https://www.palantir.com/newsroom/letters/our-new-platform/.56 See Alexander Ward et al., ‘Trump: “Used to Talk About” Ukraine Invasion with Putin’, Politico, 11 May 2023, https://www.politico.com/newsletters/national-security-daily/2023/05/11/trump-used-to-talk-about-ukraine-invasion-with-putin-00096394.57 Ross Andersen, ‘Never Give Artificial Intelligence the Nuclear Codes’, Atlantic, June 2023, 
https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/.58 See Arthur Holland Michel, ‘Known Unknowns: Data Issues and Military Autonomous Systems’, UNIDIR, 17 May 2021, https://unidir.org/known-unknowns.59 Frederik Federspiel et al., ‘Threats by Artificial Intelligence to Human Health and Human Existence’, BMJ Global Health, vol. 8, no. 5, May 2023, e010435, https://doi.org/10.1136/bmjgh-2022-010435.60 See Michael Hirsh, ‘How AI Will Revolutionize Warfare’, Foreign Policy, 11 April 2023, https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/.61 See Paul Scharre, ‘AI’s Inhuman Advantage’, War on the Rocks, 10 April 2023, https://warontherocks.com/2023/04/ais-inhuman-advantage/.62 See Benjamin Weiser and Nate Schweber, ‘The ChatGPT Lawyer Explains Himself’, New York Times, 8 June 2023, https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html. See also Stew Magnuson, ‘Just In: Pentagon’s Top AI Official Addresses ChatGPT’s Possible Benefits, Risks’, National Defense, 8 March 2023, https://www.nationaldefensemagazine.org/articles/2023/3/8/pentagons-top-ai-official-addresses-chatgpts-possible-benefits-risks.63 US Department of Defense, ‘DOD Announces Establishment of Generative AI Task Force’, 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/. See also Mohar Chatterjee, ‘Hackers in Vegas Take on AI’, Politico, 14 August 2023, https://www.politico.com/newsletters/digital-future-daily/2023/08/14/hackers-in-vegas-take-on-ai-00111145.64 Benjamin M. Jensen et al., ‘Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence’, International Studies Review, vol. 22, no. 3, September 2020, p. 537.65 See Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security, vol. 46, no. 
3, Winter 2021/2022, pp. 7–50.66 See Paul Krugman, ‘AI May Change Everything, But Probably Not Too Quickly’, New York Times, 31 March 2023, https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html.67 Paul A. David, ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’, American Economic Review, vol. 80, no. 2, May 1990, p. 356.68 See Edward L. Katzenbach, Jr, ‘The Horse Cavalry in the Twentieth Century: A Study in Policy Response’, Public Policy, vol. 7, 1958, pp. 120–49.69 Jensen et al., ‘Algorithms at War’.70 Stephanie Carvin, ‘How Not to War’, International Affairs, vol. 98, no. 5, September 2022, pp. 1,695–716.71 Krystal Hu, ‘ChatGPT Sets Record for Fastest-growing User Base’, Reuters, 2 February 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.72 See Michael C. Horowitz, The Diffusion of Military Power (Princeton, NJ: Princeton University Press, 2010).73 See Michael E. O’Hanlon, ‘The Plane Truth: Fewer F-22s Mean a Stronger National Defense’, Brookings Institution, 1 September 1999, https://www.brookings.edu/research/the-plane-truth-fewer-f-22s-mean-a-stronger-national-defense/.74 See, for example, Audrey Kurth Cronin, Power to the People: How Open Technological Innovation Is Arming Tomorrow’s Terrorists (Oxford: Oxford University Press, 2019); Ben FitzGerald and Jacqueline Parziale, ‘As Technology Goes Democratic, Nations Lose Military Control’, Bulletin of the Atomic Scientists, vol. 73, no. 2, 2017, pp. 102–7; and Emily O. Goldman and Leslie C. 
Eliason, The Diffusion of Military Technology and Ideas (Stanford, CA: Stanford University Press, 2003).75 Yonah Jeremy Bob, ‘IDF Will Run Entirely on Generative AI Within a Few Years – Israeli Cyber Chief’, Jerusalem Post, 28 June 2023, https://www.jpost.com/israel-news/defense-news/article-748028.76 See ‘Regulators Target Deepfakes’, Batch, 25 January 2023, https://www.deeplearning.ai/the-batch/chinas-new-law-limits-ai-generated-media/.77 See Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations’; and Adam Satariano, ‘Europeans Take a Major Step Toward Regulating AI’, New York Times, 14 June 2023, https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html.78 See Select Committee on Artificial Intelligence of the National Science and Technology Council, ‘National Artificial Intelligence Research and Development Strategic Plan 2023 Update’, May 2023, https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf.79 See Michael D. Shear, Cecilia Kang and David E. Sanger, ‘Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools’, New York Times, 21 July 2023, https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html.80 The G7 have also announced the ‘Hiroshima AI Process’, an intergovernmental task force designed to investigate the risks of generative AI. The initiative aims to increase collaboration on topics such as governance, safeguarding intellectual-property rights, transparency, disinformation and responsible use of AI technologies. How much influence it will have remains to be seen. 
See White House, ‘G7 Hiroshima Leaders’ Communiqué’, 20 May 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.81 See ‘Governance of Superintelligence’, OpenAI, 22 May 2023, https://openai.com/blog/governance-of-superintelligence; and Billy Perrigo, ‘Exclusive: OpenAI Lobbied the EU to Water Down AI Regulation’, Time, 20 June 2023, https://time.com/6288245/openai-eu-lobbying-ai-act/.82 See Cristiano Lima, ‘Google Bucks Calls for a New AI Regulator’, Washington Post, 13 June 2023, https://www.washingtonpost.com/politics/2023/06/13/google-bucks-calls-new-ai-regulator/.83 See ‘Why Tech Giants Want to Strangle AI with Red Tape’, The Economist, 25 May 2023, https://www.economist.com/business/2023/05/25/why-tech-giants-want-to-strangle-ai-with-red-tape; and Matteo Wong, ‘AI Doomerism Is a Decoy’, Atlantic, 2 June 2023, https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/.84 See Casey Fiesler, ‘AI Has Social Consequences, But Who Pays the Price?’, Conversation, 18 April 2023, https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375.85 Abeba Birhane and Deborah Raji, ‘ChatGPT, Galactica, and the Progress Trap’, Wired, 9 December 2022, https://www.wired.com/story/large-language-models-critique/.86 Paul Scharre, ‘AI’s Gatekeepers Aren’t Prepared for What’s Coming’, Foreign Policy, 19 June 2023, https://foreignpolicy.com/2023/06/19/ai-regulation-development-us-china-competition-technology/.87 See US Department of State, ‘Political Declaration of Responsible Military Use of Artificial Intelligence and Autonomy’, 16 February 2023, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.88 See US Department of Defense, ‘DoD Announces Update to DoD Directive 3000.09’, 25 January 2023, 
https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/.89 See Kahn, ‘Ground Rules for the Age of AI Warfare’.Additional informationNotes on contributorsSteven FeldsteinSteven Feldstein is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace and the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (Oxford University Press, 2021). From 2014 to 2017, he served as US Deputy Assistant Secretary of State for Democracy, Human Rights, and Labor.","PeriodicalId":51535,"journal":{"name":"Survival","volume":"12 1","pages":"0"},"PeriodicalIF":1.5000,"publicationDate":"2023-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Survival","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00396338.2023.2261260","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INTERNATIONAL RELATIONS","Score":null,"Total":0}
引用次数: 0
Abstract
AbstractThe potential impact of generative AI across politics, governance and war is enormous, and is the subject of considerable speculation informed by few hard facts. Yet it is possible to identify some major challenges. They include threats to democracies by privately controlled models that gain tremendous power to shape discourse and affect democratic deliberation; enhanced surveillance and propaganda dissemination by authoritarian regimes; new capacities for criminal and terrorist actors to carry out cyber attacks and related disruptions; and transformed war planning and military operations reflecting the accelerated dehumanisation of lethal force. While new innovations historically require time to take root, generative AI is likely to be adopted swiftly. Stakeholders must formulate pragmatic approaches to manage oncoming risks.Key words: Artificial intelligence (AI)chatbotsChatGPTcyber attackslarge language model (LLM)military planningpropagandasurveillance AcknowledgementsI would like to thank Tom Carothers, Matt O’Shaughnessy and Gavin Wilde for their valuable comments and feedback, and Brian (Chun Hey) Kot for his research assistance.Notes1 See Rishi Bommasani et al., ‘On the Opportunities and Risks of Foundation Models’, Center for Research on Foundational Models, Stanford University, 12 July 2022, https://crfm.stanford.edu/assets/report.pdf; and Helen Toner, ‘What Are Generative AI, Large Language Models, and Foundation Models?’, Center for Security and Emerging Technology, Georgetown University, May 2023, https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/.2 See Kevin Roose, ‘How Does ChatGPT Really Work?’, New York Times, 28 March 2023, https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html.3 See Jordan Hoffmann et al., ‘An Empirical Analysis of Computeoptimal Large Language Model Training’, Google DeepMind, 12 April 2022, 
https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training; and Pranshu Verma and Kevin Schaul, ‘See Why AI Like ChatGPT Has Gotten So Good, So Fast’, Washington Post, 24 May 2023, https://www.washingtonpost.com/business/interactive/2023/artificial-intelligence-tech-rapid-advances/.4 See Tom B. Brown et al., ‘Language Models Are Few-shot Learners’, 34th Conference on Neural Information Processing Systems (Neur IPS 2020), Vancouver, Canada, 22 July 2020, https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.5 See Lukas Esterle, ‘Deep Learning in Multiagent Systems’, in Alexandros Iosifidis and Anastasios Tefas (eds), Deep Learning for Robot Perception and Cognition (Cambridge, MA: Academic Press, 2022), pp. 435–60; and David Nield, ‘Supercharge Your ChatGPT Prompts with Auto-GPT’, Wired, 21 May 2023, https://www.wired.co.uk/article/chatgpt-prompts-auto-gpt. It is worth noting that the autonomy of an AI system sits on a spectrum, rather than being binary. While the goal of developers is to increase the ability of AI systems to complete increasingly complex tasks, this will be a slow evolution rather than a sudden jump in capabilities.6 See Chloe Xiang, ‘Developers Are Connecting Multiple AI Agents to Make More “Autonomous” AI’, Vice, 4 April 2023, https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai.7 See Mark Sullivan, ‘Auto-GPT and BabyAGI: How “Autonomous Agents” Are Bringing Generative AI to the Masses’, Fast Company, 13 April 2023, https://www.fastcompany.com/90880294/auto-gpt-and-babyagi-how-autonomous-agents-are-bringing-generative-ai-to-the-masses.8 See, for example, Josh Zumbrun, ‘Why ChatGPT Is Getting Dumber at Basic Math’, Wall Street Journal, 4 August 2023, https://www.wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0.9 See, for example, Tristan Bove, ‘Bill Gates Says that the A.I. 
Revolution Means Everyone Will Have Their Own “White Collar” Personal Assistant’, Fortune, 6 May 2023, https://fortune.com/2023/03/22/bill-gates-ai-work-productivity-personal-assistants-chatgpt/.10 Gary Marcus, ‘Senate Testimony’, US Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, 118th Congress, 16 May 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf.11 See Davey Alba, ‘OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails’, Bloomberg, 8 December 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results.12 See Emily M. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 610–23, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.13 Hannes Bajohr, ‘Whoever Controls Language Models Controls Politics’, 8 April 2023, https://hannesbajohr.de/en/2023/04/08/whoever-controls-language-models-controls-politics/.14 Ibid.15 See Steven Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations: How Will This Influence Global Norms?’, Democratization, 2023, pp. 1–18.16 See Kayleen Devlin and Joshua Cheetham, ‘Fake Trump Arrest Photos: How to Spot an AI-generated Image’, BBC News, 24 March 2023, https://www.bbc.com/news/world-us-canada-65069316.17 ‘Beat Biden’, YouTube, 25 April 2023, https://www.youtube.com/watch?v=kLMMxgtxQ1Y. See also Isaac Stanley-Becker and John Wagner, ‘Republicans Counter Biden Announcement with Dystopian, AI-aided Video’, Washington Post, 25 April 2023, https://www.washingtonpost.com/politics/2023/04/25/rnc-biden-ad-ai/.18 See Andrew R. Sorkin et al., ‘An A.I.generated Spoof Rattles the Markets’, New York Times, 23 May 2023, https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html.19 See Josh A. 
Goldstein et al., ‘Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations’, January 2023, https://cdn.openai.com/papers/forecasting-misuse.pdf.20 See Thor Benson, ‘Brace Yourself for the 2024 Deepfake Election’, Wired, 27 April 2023, https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/.21 Goldstein et al., ‘Generative Language Models and Automated Influence Operations’.22 Josh A. Goldstein and Girish Sastry, ‘The Coming Age of AI-powered Propaganda’, Foreign Affairs, 27 April 2023, https://www.foreignaffairs.com/united-states/coming-age-ai-powered-propaganda.23 See Ben M. Tappin et al., ‘Quantifying the Potential Persuasive Returns to Political Microtargeting’, Proceedings of the National Academy of Sciences, vol. 120, no. 25, June 2023, https://www.pnas.org/doi/10.1073/pnas.2216261120. The literature on disinformation is not settled about how much false online information impacts and undermines democracy. See, for example, Jon Bateman et al., ‘Measuring the Effects of Influence Operations: Key Findings and Gaps from Empirical Research’, Carnegie Endowment for International Peace – PCIO Baseline, 28 June 2021, https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824; and Joshua A. Tucker et al., ‘Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature’, Hewlett Foundation, 19 March 2018, https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/.24 See Nathan E. 
Sanders and Bruce Schneier, ‘How ChatGPT Hijacks Democracy’, New York Times, 15 January 2023, https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.25 See Sarah Kreps and Douglas Kriner, ‘How Generative AI Impacts Democratic Engagement’, Brookings Institution, 21 March 2023, https://www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement/.26 See Steven Feldstein, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, September 2019, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847; Steven Feldstein, ‘How Artificial Intelligence Is Reshaping Repression’, Journal of Democracy, vol. 30, no. 1, January 2019, pp. 40–52; Steven Feldstein, The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (New York: Oxford University Press, 2021); Andrea Kendall-Taylor et al., ‘The Digital Dictators’, Foreign Affairs, vol. 99, no. 2, March/April 2020, pp. 103–15; and Nicholas Wright, ‘How Artificial Intelligence Will Reshape the Global Order’, Foreign Affairs, 10 July 2018, https://www.foreignaffairs.com/articles/world/2018-07-10/how-artificial-intelligence-will-reshape-global-order.27 Samantha Hoffman, ‘Programming China: The Communist Party’s Autonomic Approach to Managing State Security’, MERICS, 12 December 2017, https://merics.org/sites/default/files/2020-05/Programming%20China.pdf.28 Steven Feldstein, ‘The Global Struggle Over AI Surveillance’, National Endowment for Democracy, June 2022, https://www.ned.org/global-struggle-over-ai-surveillance-emerging-trends-democratic-responses/.29 See Dahlia Peterson, ‘How China Harnesses Data Fusion to Make Sense of Surveillance Data’, Brookings Institution, 23 September 2021, https://www.brookings.edu/articles/how-china-harnesses-data-fusion-to-make-sense-of-surveillance-data/.30 Cissy Zhou, ‘China Tells Big Tech Companies Not to Offer ChatGPT Services’, Nikkei Asia, 22 February 2023, 
https://asia.nikkei.com/Business/China-tech/China-tells-big-tech-companies-not-to-offer-ChatGPT-services. The list of countries in which ChatGPT is inaccessible, as of June 2023, predictably includes many authoritarian states, such as Afghanistan, China, Cuba, Iran, North Korea, Russia and Syria. Notably, Italy is also included on the list due to a ruling by its data-protection watchdog that OpenAI may be in breach of Europe’s privacy regulations. See Ryan Browne, ‘Italy Became the First Western Country to Ban ChatGPT. Here’s What Other Countries Are Doing’, CNBC, 4 April 2023, https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html; and Jon Martindale, ‘These Are the Countries Where ChatGPT Is Currently Banned’, Digital Trends, 12 April 2023, https://www.digitaltrends.com/computing/these-countries-chatgpt-banned/.31 See Channing Lee, ‘From ChatGPT to Chat CCP: The Future of Generative AI Models in China’, Georgetown Security Studies Review, 3 March 2023, https://georgetownsecuritystudiesreview.org/2023/03/03/from-chatgpt-to-chat-ccp-the-future-of-generative-ai-models-in-china/.32 See Sophia Yang, ‘China’s ChatGPT-style Bot ChatYuan Suspended Over Questions About Xi’, Taiwan News, 11 February 2023, https://www.taiwannews.com.tw/en/news/4807319. A Chinese CEO reportedly quipped that ‘China’s LLMs are not even allowed to count to 10, as that would include the numbers eight and nine – a reference to the state’s sensitivity about the number 89 and any discussion of the 1989 Tiananmen Square protests’.
Quoted in Helen Toner et al., ‘The Illusion of China’s AI Prowess’, Foreign Affairs, 2 June 2023, https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation.33 See Paul Triolo, ‘ChatGPT and China: How to Think About Large Language Models and the Generative AI Race’, China Project, 12 April 2023, https://thechinaproject.com/2023/04/12/chatgpt-and-china-how-to-think-about-large-language-models-and-the-generative-ai-race/.34 See Meaghan Tobin, ‘China Announces Rules to Keep AI Bound by “Core Socialist Values”’, Washington Post, 14 July 2023, https://www.washingtonpost.com/world/2023/07/14/china-ai-regulations-chatgpt-socialist/.35 See Helen Toner et al., ‘How Will China’s Generative AI Regulations Shape the Future? A DigiChina Forum’, DigiChina, Stanford University, 19 April 2023, https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/.36 Toner et al., ‘The Illusion of China’s AI Prowess’.37 Training GPT-3 required 1.3 gigawatt-hours of electricity (equivalent to powering 121 homes in the United States for a year) and cost $4.6m. The training costs for GPT-4 are far higher, likely exceeding $100m. See ‘Large, Creative AI Models Will Transform Lives and Labour Markets’, The Economist, 22 April 2023, https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work.38 See Lisa Barrington, ‘Abu Dhabi Makes Its Falcon 40B AI Model Open Source’, Reuters, 25 May 2023, https://www.reuters.com/technology/abu-dhabi-makes-its-falcon-40b-ai-model-open-source-2023-05-25/.39 See Cade Metz and Mike Isaac, ‘In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels’, New York Times, 18 May 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html.40 See, for example, Rebecca Tan, ‘Facebook Helped Bring Free Speech to Vietnam.
Now It’s Helping Stifle It’, Washington Post, 19 June 2023, https://www.washingtonpost.com/world/2023/06/19/facebook-meta-vietnam-government-censorship/.41 See Catherine Stupp, ‘Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case’, Wall Street Journal, 30 August 2019, https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.42 Leah Nylen, ‘FTC’s Khan Says Enforcers Need to Be “Vigilant Early” with AI’, Bloomberg, 1 June 2023, https://www.bloomberg.com/news/articles/2023-06-02/ftc-s-khan-says-enforcers-need-to-be-vigilant-early-with-ai.43 See Matt Burgess, ‘The Hacking of ChatGPT Is Just Getting Started’, Wired, 13 April 2023, https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/; and Kyle Wiggers, ‘Can AI Really Be Protected from Text-based Attacks?’, TechCrunch, 24 February 2023, https://techcrunch.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/?guccounter=1.44 See Europol, ‘ChatGPT: The Impact of Large Language Models on Law Enforcement’, Tech Watch Flash Report from the Europol Innovation Lab, 27 March 2023, https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf.45 Ibid.46 See Andrew J. Lohn and Krystal A. Jackson, ‘Will AI Make Cyber Swords or Shields?’, Georgetown University’s Center for Security and Emerging Technology, August 2022, https://cset.georgetown.edu/wp-content/uploads/CSET-Will-AI-Make-Cyber-Swords-or-Shields.pdf.47 See Steven Feldstein and Brian Kot, ‘Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses’, Carnegie Endowment for International Peace, working paper, March 2023, https://carnegieendowment.org/2023/03/14/why-does-global-spyware-industry-continue-to-thrive-trends-explanations-and-responses-pub-89229.48 Ronald J. 
Deibert, ‘The Autocrat in Your iPhone’, Foreign Affairs, 12 December 2022, https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert.49 Europol, ‘ChatGPT’.50 See Thomas Gaulkin, ‘What Happened When WMD Experts Tried to Make the GPT-4 AI Do Bad Things’, Bulletin of the Atomic Scientists, 30 March 2023, https://thebulletin.org/2023/03/what-happened-when-wmd-experts-tried-to-make-the-gpt-4-ai-do-bad-things/.51 Lauren Kahn, ‘Ground Rules for the Age of AI Warfare’, Foreign Affairs, 6 June 2023, https://www.foreignaffairs.com/world/ground-rules-age-ai-warfare.52 See David Ignatius, ‘How the Algorithm Tipped the Balance in Ukraine’, Washington Post, 19 December 2022, https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/; and Kahn, ‘Ground Rules for the Age of AI Warfare’.53 See John Antal, 7 Seconds to Die: A Military Analysis of the Second Nagorno-Karabakh War and the Future of Warfighting (Philadelphia, PA: Casemate, 2022); and Kelsey Atherton, ‘Loitering Munitions Preview the Autonomous Future of Warfare’, Brookings Institution, 4 August 2021, https://www.brookings.edu/techstream/loitering-munitions-preview-the-autonomous-future-of-warfare/.54 See Benjamin Jensen and Dan Tadross, ‘How Large-language Models Can Revolutionize Military Planning’, War on the Rocks, 12 April 2023, https://warontherocks.com/2023/04/how-large-language-models-can-revolutionize-military-planning/.55 Alexander Karp, ‘Our New Platform – A Letter from the Chief Executive Officer’, Palantir, 7 April 2023, https://www.palantir.com/newsroom/letters/our-new-platform/.56 See Alexander Ward et al., ‘Trump: “Used to Talk About” Ukraine Invasion with Putin’, Politico, 11 May 2023, https://www.politico.com/newsletters/national-security-daily/2023/05/11/trump-used-to-talk-about-ukraine-invasion-with-putin-00096394.57 Ross Andersen, ‘Never Give Artificial Intelligence the Nuclear Codes’, Atlantic, June 2023, 
https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/.58 See Arthur Holland Michel, ‘Known Unknowns: Data Issues and Military Autonomous Systems’, UNIDIR, 17 May 2021, https://unidir.org/known-unknowns.59 Frederik Federspiel et al., ‘Threats by Artificial Intelligence to Human Health and Human Existence’, BMJ Global Health, vol. 8, no. 5, May 2023, e010435, https://doi.org/10.1136/bmjgh-2022-010435.60 See Michael Hirsh, ‘How AI Will Revolutionize Warfare’, Foreign Policy, 11 April 2023, https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/.61 See Paul Scharre, ‘AI’s Inhuman Advantage’, War on the Rocks, 10 April 2023, https://warontherocks.com/2023/04/ais-inhuman-advantage/.62 See Benjamin Weiser and Nate Schweber, ‘The ChatGPT Lawyer Explains Himself’, New York Times, 8 June 2023, https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html. See also Stew Magnuson, ‘Just In: Pentagon’s Top AI Official Addresses ChatGPT’s Possible Benefits, Risks’, National Defense, 8 March 2023, https://www.nationaldefensemagazine.org/articles/2023/3/8/pentagons-top-ai-official-addresses-chatgpts-possible-benefits-risks.63 US Department of Defense, ‘DOD Announces Establishment of Generative AI Task Force’, 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/. See also Mohar Chatterjee, ‘Hackers in Vegas Take on AI’, Politico, 14 August 2023, https://www.politico.com/newsletters/digital-future-daily/2023/08/14/hackers-in-vegas-take-on-ai-00111145.64 Benjamin M. Jensen et al., ‘Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence’, International Studies Review, vol. 22, no. 3, September 2020, p. 537.65 See Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security, vol. 46, no. 
3, Winter 2021/2022, pp. 7–50.66 See Paul Krugman, ‘AI May Change Everything, But Probably Not Too Quickly’, New York Times, 31 March 2023, https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html.67 Paul A. David, ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’, American Economic Review, vol. 80, no. 2, May 1990, p. 356.68 See Edward L. Katzenbach, Jr, ‘The Horse Cavalry in the Twentieth Century: A Study in Policy Response’, Public Policy, vol. 7, 1958, pp. 120–49.69 Jensen et al., ‘Algorithms at War’.70 Stephanie Carvin, ‘How Not to War’, International Affairs, vol. 98, no. 5, September 2022, pp. 1,695–716.71 Krystal Hu, ‘ChatGPT Sets Record for Fastest-growing User Base’, Reuters, 2 February 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.72 See Michael C. Horowitz, The Diffusion of Military Power (Princeton, NJ: Princeton University Press, 2010).73 See Michael E. O’Hanlon, ‘The Plane Truth: Fewer F-22s Mean a Stronger National Defense’, Brookings Institution, 1 September 1999, https://www.brookings.edu/research/the-plane-truth-fewer-f-22s-mean-a-stronger-national-defense/.74 See, for example, Audrey Kurth Cronin, Power to the People: How Open Technological Innovation Is Arming Tomorrow’s Terrorists (Oxford: Oxford University Press, 2019); Ben FitzGerald and Jacqueline Parziale, ‘As Technology Goes Democratic, Nations Lose Military Control’, Bulletin of the Atomic Scientists, vol. 73, no. 2, 2017, pp. 102–7; and Emily O. Goldman and Leslie C. 
Eliason, The Diffusion of Military Technology and Ideas (Stanford, CA: Stanford University Press, 2003).75 Yonah Jeremy Bob, ‘IDF Will Run Entirely on Generative AI Within a Few Years – Israeli Cyber Chief’, Jerusalem Post, 28 June 2023, https://www.jpost.com/israel-news/defense-news/article-748028.76 See ‘Regulators Target Deepfakes’, Batch, 25 January 2023, https://www.deeplearning.ai/the-batch/chinas-new-law-limits-ai-generated-media/.77 See Feldstein, ‘Evaluating Europe’s Push to Enact AI Regulations’; and Adam Satariano, ‘Europeans Take a Major Step Toward Regulating AI’, New York Times, 14 June 2023, https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html.78 See Select Committee on Artificial Intelligence of the National Science and Technology Council, ‘National Artificial Intelligence Research and Development Strategic Plan 2023 Update’, May 2023, https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf.79 See Michael D. Shear, Cecilia Kang and David E. Sanger, ‘Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools’, New York Times, 21 July 2023, https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html.80 The G7 have also announced the ‘Hiroshima AI Process’, an intergovernmental task force designed to investigate the risks of generative AI. The initiative aims to increase collaboration on topics such as governance, safeguarding intellectual-property rights, transparency, disinformation and responsible use of AI technologies. How much influence it will have remains to be seen.
See White House, ‘G7 Hiroshima Leaders’ Communiqué’, 20 May 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.81 See ‘Governance of Superintelligence’, OpenAI, 22 May 2023, https://openai.com/blog/governance-of-superintelligence; and Billy Perrigo, ‘Exclusive: OpenAI Lobbied the EU to Water Down AI Regulation’, Time, 20 June 2023, https://time.com/6288245/openai-eu-lobbying-ai-act/.82 See Cristiano Lima, ‘Google Bucks Calls for a New AI Regulator’, Washington Post, 13 June 2023, https://www.washingtonpost.com/politics/2023/06/13/google-bucks-calls-new-ai-regulator/.83 See ‘Why Tech Giants Want to Strangle AI with Red Tape’, The Economist, 25 May 2023, https://www.economist.com/business/2023/05/25/why-tech-giants-want-to-strangle-ai-with-red-tape; and Matteo Wong, ‘AI Doomerism Is a Decoy’, Atlantic, 2 June 2023, https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/.84 See Casey Fiesler, ‘AI Has Social Consequences, But Who Pays the Price?’, Conversation, 18 April 2023, https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375.85 Abeba Birhane and Deborah Raji, ‘ChatGPT, Galactica, and the Progress Trap’, Wired, 9 December 2022, https://www.wired.com/story/large-language-models-critique/.86 Paul Scharre, ‘AI’s Gatekeepers Aren’t Prepared for What’s Coming’, Foreign Policy, 19 June 2023, https://foreignpolicy.com/2023/06/19/ai-regulation-development-us-china-competition-technology/.87 See US Department of State, ‘Political Declaration of Responsible Military Use of Artificial Intelligence and Autonomy’, 16 February 2023, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.88 See US Department of Defense, ‘DoD Announces Update to DoD Directive 3000.09’, 25 January 2023, 
https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/.89 See Kahn, ‘Ground Rules for the Age of AI Warfare’.Additional informationNotes on contributorsSteven FeldsteinSteven Feldstein is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace and the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (Oxford University Press, 2021). From 2014 to 2017, he served as US Deputy Assistant Secretary of State for Democracy, Human Rights, and Labor.
Journal overview:
Survival, the Institute’s bi-monthly journal, is a leading forum for analysis and debate of international and strategic affairs. With a diverse range of authors, thoughtful reviews and review essays, Survival is scholarly in depth while vivid, well-written and policy-relevant in approach. Shaped by its editors to be both timely and forward-thinking, the journal encourages writers to challenge conventional wisdom and bring fresh, often controversial, perspectives to bear on the strategic issues of the moment. Survival is essential reading for practitioners, analysts, teachers and followers of international affairs. Each issue also contains Book Reviews of the most important recent publications on international politics and security.