Entropic boundary conditions towards safe artificial superintelligence

Impact Factor: 1.7 · JCR Q3 (Computer Science, Artificial Intelligence) · CAS Tier 4 (Computer Science)
Santiago Núñez Corrales, E. Jakobsson
{"title":"安全人工超级智能的熵边界条件","authors":"Santiago Núñez Corrales, E. Jakobsson","doi":"10.1080/0952813X.2021.1952653","DOIUrl":null,"url":null,"abstract":"ABSTRACT Artificial superintelligent (ASI) agents that will not cause harm to humans or other organisms are central to mitigating a growing contemporary global safety concern as artificial intelligent agents become more sophisticated. We argue that it is not necessary to resort to implementing an explicit theory of ethics, and that doing so may entail intractable difficulties and unacceptable risks. We attempt to provide some insight into the matter by defining a minimal set of boundary conditions potentially capable of decreasing the probability of conflict with synthetic intellects intended to prevent aggression towards organisms. Our argument departs from causal entropic forces as good general predictors of future action in ASI agents. We reason that maximising future freedom of action implies reducing the amount of repeated computation needed to find good solutions to a large number of problems, for which living systems are good exemplars: a safe ASI should find living organisms intrinsically valuable. We describe empirically-bounded ASI agents whose actions are constrained by the character of physical laws and their own evolutionary history as emerging from H. sapiens, conceptually and memetically, if not genetically. Plausible consequences and practical concerns for experimentation are characterised, and implications for life in the universe are discussed.","PeriodicalId":15677,"journal":{"name":"Journal of Experimental & Theoretical Artificial Intelligence","volume":"65 1","pages":"1 - 33"},"PeriodicalIF":1.7000,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Entropic boundary conditions towards safe artificial superintelligence\",\"authors\":\"Santiago Núñez Corrales, E. Jakobsson\",\"doi\":\"10.1080/0952813X.2021.1952653\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Artificial superintelligent (ASI) agents that will not cause harm to humans or other organisms are central to mitigating a growing contemporary global safety concern as artificial intelligent agents become more sophisticated. We argue that it is not necessary to resort to implementing an explicit theory of ethics, and that doing so may entail intractable difficulties and unacceptable risks. We attempt to provide some insight into the matter by defining a minimal set of boundary conditions potentially capable of decreasing the probability of conflict with synthetic intellects intended to prevent aggression towards organisms. Our argument departs from causal entropic forces as good general predictors of future action in ASI agents. We reason that maximising future freedom of action implies reducing the amount of repeated computation needed to find good solutions to a large number of problems, for which living systems are good exemplars: a safe ASI should find living organisms intrinsically valuable. We describe empirically-bounded ASI agents whose actions are constrained by the character of physical laws and their own evolutionary history as emerging from H. sapiens, conceptually and memetically, if not genetically. 
Plausible consequences and practical concerns for experimentation are characterised, and implications for life in the universe are discussed.\",\"PeriodicalId\":15677,\"journal\":{\"name\":\"Journal of Experimental & Theoretical Artificial Intelligence\",\"volume\":\"65 1\",\"pages\":\"1 - 33\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2021-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Experimental & Theoretical Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1080/0952813X.2021.1952653\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental & Theoretical Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0952813X.2021.1952653","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Artificial superintelligent (ASI) agents that will not cause harm to humans or other organisms are central to mitigating a growing contemporary global safety concern as artificial intelligent agents become more sophisticated. We argue that it is not necessary to resort to implementing an explicit theory of ethics, and that doing so may entail intractable difficulties and unacceptable risks. We attempt to provide some insight into the matter by defining a minimal set of boundary conditions potentially capable of decreasing the probability of conflict with synthetic intellects intended to prevent aggression towards organisms. Our argument takes as its point of departure causal entropic forces, which are good general predictors of future action in ASI agents. We reason that maximising future freedom of action implies reducing the amount of repeated computation needed to find good solutions to a large number of problems, for which living systems are good exemplars: a safe ASI should find living organisms intrinsically valuable. We describe empirically-bounded ASI agents whose actions are constrained by the character of physical laws and their own evolutionary history as emerging from H. sapiens, conceptually and memetically, if not genetically. Plausible consequences and practical concerns for experimentation are characterised, and implications for life in the universe are discussed.
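The abstract's central mechanism, causal entropic forcing (in the sense of Wissner-Gross and Freer's 2013 formulation F(X₀, τ) = T_c ∇_X S_c(X, τ)|_{X₀}), favours actions that keep the largest number of future trajectories reachable. A minimal sketch of that idea follows, assuming a toy one-dimensional bounded random walk; the dynamics, the bounds, and every name below are illustrative assumptions, not anything from the paper.

```python
# Hypothetical sketch: pick the action whose successor state maximises a
# Monte Carlo estimate of causal path entropy. Entropy over final states
# is used here as a crude, cheap proxy for entropy over whole paths.
import math
import random
from collections import Counter

def rollout(state: int, horizon: int) -> int:
    """Simulate one random future trajectory; return its final state."""
    for _ in range(horizon):
        state += random.choice((-1, 0, 1))   # toy 1-D dynamics
        state = max(-10, min(10, state))     # walls clamp the state to [-10, 10]
    return state

def causal_entropy(state: int, horizon: int = 20, samples: int = 500) -> float:
    """Monte Carlo estimate of the Shannon entropy over reachable futures."""
    counts = Counter(rollout(state, horizon) for _ in range(samples))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def pick_action(state: int, actions=(-1, 0, 1)) -> int:
    """Greedy step toward the successor state with the most open futures."""
    return max(actions, key=lambda a: causal_entropy(max(-10, min(10, state + a))))

print(pick_action(9))  # near a wall, the agent tends to step back toward open space
```

In this toy setting the entropic agent avoids corners of its state space for the same reason the paper argues a causally entropic ASI would avoid destroying living systems: both moves would irreversibly prune its space of reachable futures.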
Source journal: Journal of Experimental & Theoretical Artificial Intelligence
CiteScore: 6.10
Self-citation rate: 4.50%
Articles per year: 89
Review time: >12 weeks
Journal description: Journal of Experimental & Theoretical Artificial Intelligence (JETAI) is a world-leading journal dedicated to publishing high-quality, rigorously reviewed, original papers in artificial intelligence (AI) research. The journal features work in all subfields of AI research and accepts both theoretical and applied research. Topics covered include, but are not limited to: cognitive science, games, learning, knowledge representation, memory and neural system modelling, perception, and problem-solving.