Superintelligence: Fears, Promises and Potentials

B. Goertzel
{"title":"Superintelligence: Fears, Promises and Potentials","authors":"B. Goertzel","doi":"10.55613/jeet.v25i2.48","DOIUrl":null,"url":null,"abstract":"Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute (formerly Singularity Institute for AI), and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute. \n  \nRelevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions (e.g. that it may well be best if advanced AI is developed in secret by a small elite group). \n  \nBostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning – they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals like tiling the universe with paperclips. Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment that is also ongoingly self-organizing, in only partially knowable ways. \n  \nIt is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst case scenarios” for advanced AI development are extremely dire, are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. 
If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex and uncertain, just as it has always been.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Ethics and Emerging Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.55613/jeet.v25i2.48","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute (formerly the Singularity Institute for AI), and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute.

Relevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions (e.g., that it may well be best if advanced AI is developed in secret by a small elite group).

Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning: they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals, like tiling the universe with paperclips. Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment which is itself continually self-organizing, in only partially knowable ways.

It is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst case scenarios” for advanced AI development are extremely dire are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And the potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide open, complex, and uncertain, just as it has always been.
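As a purely illustrative aside (not part of the paper itself), the “reward-maximizer” framing described above can be made concrete with a minimal sketch of a tabular Q-learning agent whose only interface to its goal is a scalar reward signal. The toy environment OneDWorld and the paperclip_reward function are invented names for this illustration and are not drawn from Bostrom’s or Yudkowsky’s writing.

```python
import random
from collections import defaultdict

# Illustrative sketch of the "reward-maximizer" lens: a tabular Q-learning
# agent that optimizes whatever scalar reward it is handed, indifferent to
# what that reward is supposed to represent.

class OneDWorld:
    """Agent moves left/right on a line; the state is its position 0..size-1."""
    def __init__(self, size=10):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = move left, 1 = move right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action else -1)))
        return self.pos

def paperclip_reward(state):
    """A deliberately 'foolish' goal: reward is proximity to the rightmost
    cell, standing in for 'more paperclips'."""
    return float(state)

def q_learning(env, reward_fn, episodes=200, steps=30,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    q = defaultdict(lambda: [0.0, 0.0])
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a = random.randrange(2) if random.random() < epsilon \
                else max((0, 1), key=lambda x: q[s][x])
            s2 = env.step(a)
            r = reward_fn(s2)
            # Standard Q-learning update: the agent only ever "sees" r,
            # never the intent behind it.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = q_learning(OneDWorld(), paperclip_reward)
    greedy = [max((0, 1), key=lambda a: q[s][a]) for s in range(10)]
    print("greedy action per state (1 = toward 'more paperclips'):", greedy)
```

The sketch is only meant to make the framing tangible: the learning update is indifferent to what the reward stands for, which is the property that both the “paperclip” worry and Weaver and Veitas’s objection to treating reward maximization as the whole of intelligence turn on.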