The dangers of generative artificial intelligence

Q2 Economics, Econometrics and Finance
Luke Tredinnick, Claire Laybats
DOI: 10.1177/02663821231183756
Business Information Review, Volume 40, Issue 1, pp. 46–48
Published: 2023-06-01 (Journal Article)
Citations: 1

Abstract

2023 looks set to become the year that anxieties about the risks posed by artificial intelligence (AI) escape from their safe confines in techno-sociological debates into the wider public consciousness. Barely a week has gone by without a new warning about the threat of AI and the potentially dire consequences of emergent technology. In the last month alone dozens of stories have appeared in the world's press. Tech leaders and academics issued a statement warning that AI poses a risk of human extinction and should be treated as "a global priority alongside other societal-scale risks such as pandemics and nuclear war" (Centre for AI Safety, 2023). Professor Stuart Russell was reported as stating that "if we don't control our own civilisation, we have no say in whether we continue to exist" (Taylor, 2023). An article was published in BMJ Global Health warning of the existential threat of AI (Federspiel et al., 2023). Geoffrey Hinton – widely described as the "godfather of AI" – warned of a "serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control" (Allyn, 2023). A simulated trial of an AI drone was reported to have developed "highly unexpected strategies", including "killing" its operator to allow it to complete its mission (Guardian, 2023). On top of this have been hundreds of opinion articles and other news items addressing the risk of AI. The average news junkie could be forgiven for thinking that the technological singularity – a longstanding fear about runaway, AI-driven technological advancement – is only weeks or months away. This sudden panic about the future of AI is in large part a product of the success of large language models and emerging forms of generative AI, particularly in music and image creation.
There is something uncanny about the apparent human level of understanding in the latest generative AI technologies, which can respond with remarkable prescience to often quite vague requests and generate apparently spontaneous and humanly meaningful outputs. Interacting with ChatGPT can give the impression of communication with a conscious and self-aware machine. But this experience reveals more about what it means to be human than it does about the abilities of technology. We are predisposed to perceive motivation and understanding in the acts of others, and generative AI has reached the point where it can trick us now and then into seeing motivations that are not there. Fortunately, the current threat of AI is vastly overstated and the technological singularity remains a distant theoretical danger. We are not really significantly closer to the emergence of Artificial General Intelligence, and however uncanny the experience of interacting with large language models, they remain resolutely dumb, lacking anything that can be interpreted as true understanding. But while the current generation of AI is not about to develop autonomous dangerous behaviours, it nevertheless presents new challenges for regulation, law, and professional practice. These challenges include:
Source journal

Business Information Review
Economics, Econometrics and Finance (miscellaneous)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles published: 22
Journal description: Business Information Review (BIR) is concerned with information and knowledge management within organisations. To be successful, organisations need to gain maximum value from exploiting relevant information and knowledge. BIR deals with information strategies and operational good practice across the range of activities required to deliver this information dividend. The journal aims to highlight developments in the economic, social and technological landscapes that will impact the way organisations operate. BIR also provides insights into the factors that contribute to individual professional success.