AI Consciousness and Public Perceptions: Four Futures

Ines Fernandez, Nicoleta Kyosovska, Jay Luong, Gabriel Mukobi
arXiv:2408.04771 · arXiv - CS - Computers and Society · Published 2024-08-08
Citations: 0

Abstract

The discourse on risks from advanced AI systems ("AIs") typically focuses on misuse, accidents and loss of control, but the question of AIs' moral status could have negative impacts which are of comparable significance and could be realised within similar timeframes. Our paper evaluates these impacts by investigating (1) the factual question of whether future advanced AI systems will be conscious, together with (2) the epistemic question of whether future human society will broadly believe advanced AI systems to be conscious. Assuming binary responses to (1) and (2) gives rise to four possibilities: in the true positive scenario, society predominantly correctly believes that AIs are conscious; in the false positive scenario, that belief is incorrect; in the true negative scenario, society correctly believes that AIs are not conscious; and lastly, in the false negative scenario, society incorrectly believes that AIs are not conscious. The paper offers vivid vignettes of the different futures to ground the two-dimensional framework. Critically, we identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity. We evaluate each risk across the different scenarios and provide an overall qualitative risk assessment for each scenario. Our analysis suggests that the worst possibility is the wrong belief that AI is non-conscious, followed by the wrong belief that AI is conscious. The paper concludes with the main recommendations to avoid research aimed at intentionally creating conscious AI and instead focus efforts on reducing our current uncertainties on both the factual and epistemic questions on AI consciousness.