{"title":"人工智能意识与公众认知:四种未来","authors":"Ines Fernandez, Nicoleta Kyosovska, Jay Luong, Gabriel Mukobi","doi":"arxiv-2408.04771","DOIUrl":null,"url":null,"abstract":"The discourse on risks from advanced AI systems (\"AIs\") typically focuses on\nmisuse, accidents and loss of control, but the question of AIs' moral status\ncould have negative impacts which are of comparable significance and could be\nrealised within similar timeframes. Our paper evaluates these impacts by\ninvestigating (1) the factual question of whether future advanced AI systems\nwill be conscious, together with (2) the epistemic question of whether future\nhuman society will broadly believe advanced AI systems to be conscious.\nAssuming binary responses to (1) and (2) gives rise to four possibilities: in\nthe true positive scenario, society predominantly correctly believes that AIs\nare conscious; in the false positive scenario, that belief is incorrect; in the\ntrue negative scenario, society correctly believes that AIs are not conscious;\nand lastly, in the false negative scenario, society incorrectly believes that\nAIs are not conscious. The paper offers vivid vignettes of the different\nfutures to ground the two-dimensional framework. Critically, we identify four\nmajor risks: AI suffering, human disempowerment, geopolitical instability, and\nhuman depravity. We evaluate each risk across the different scenarios and\nprovide an overall qualitative risk assessment for each scenario. Our analysis\nsuggests that the worst possibility is the wrong belief that AI is\nnon-conscious, followed by the wrong belief that AI is conscious. The paper\nconcludes with the main recommendations to avoid research aimed at\nintentionally creating conscious AI and instead focus efforts on reducing our\ncurrent uncertainties on both the factual and epistemic questions on AI\nconsciousness.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI Consciousness and Public Perceptions: Four Futures\",\"authors\":\"Ines Fernandez, Nicoleta Kyosovska, Jay Luong, Gabriel Mukobi\",\"doi\":\"arxiv-2408.04771\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The discourse on risks from advanced AI systems (\\\"AIs\\\") typically focuses on\\nmisuse, accidents and loss of control, but the question of AIs' moral status\\ncould have negative impacts which are of comparable significance and could be\\nrealised within similar timeframes. Our paper evaluates these impacts by\\ninvestigating (1) the factual question of whether future advanced AI systems\\nwill be conscious, together with (2) the epistemic question of whether future\\nhuman society will broadly believe advanced AI systems to be conscious.\\nAssuming binary responses to (1) and (2) gives rise to four possibilities: in\\nthe true positive scenario, society predominantly correctly believes that AIs\\nare conscious; in the false positive scenario, that belief is incorrect; in the\\ntrue negative scenario, society correctly believes that AIs are not conscious;\\nand lastly, in the false negative scenario, society incorrectly believes that\\nAIs are not conscious. The paper offers vivid vignettes of the different\\nfutures to ground the two-dimensional framework. 
Critically, we identify four\\nmajor risks: AI suffering, human disempowerment, geopolitical instability, and\\nhuman depravity. We evaluate each risk across the different scenarios and\\nprovide an overall qualitative risk assessment for each scenario. Our analysis\\nsuggests that the worst possibility is the wrong belief that AI is\\nnon-conscious, followed by the wrong belief that AI is conscious. The paper\\nconcludes with the main recommendations to avoid research aimed at\\nintentionally creating conscious AI and instead focus efforts on reducing our\\ncurrent uncertainties on both the factual and epistemic questions on AI\\nconsciousness.\",\"PeriodicalId\":501112,\"journal\":{\"name\":\"arXiv - CS - Computers and Society\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computers and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.04771\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04771","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AI Consciousness and Public Perceptions: Four Futures
Ines Fernandez, Nicoleta Kyosovska, Jay Luong, Gabriel Mukobi
The discourse on risks from advanced AI systems ("AIs") typically focuses on
misuse, accidents, and loss of control, but the question of AIs' moral status
could have negative impacts of comparable significance, realisable within
similar timeframes. Our paper evaluates these impacts by
investigating (1) the factual question of whether future advanced AI systems
will be conscious, together with (2) the epistemic question of whether future
human society will broadly believe advanced AI systems to be conscious.
Assuming binary responses to (1) and (2) gives rise to four possibilities: in
the true positive scenario, society predominantly, and correctly, believes
that AIs are conscious; in the false positive scenario, that belief is
incorrect; in the
true negative scenario, society correctly believes that AIs are not conscious;
and lastly, in the false negative scenario, society incorrectly believes that
AIs are not conscious. The paper offers vivid vignettes of the different
futures to ground the two-dimensional framework. Critically, we identify four
major risks: AI suffering, human disempowerment, geopolitical instability, and
human depravity. We evaluate each risk across the different scenarios and
provide an overall qualitative risk assessment for each scenario. Our analysis
suggests that the worst possibility is the mistaken belief that AI is not
conscious (the false negative scenario), followed by the mistaken belief that
it is conscious (the false positive scenario). The paper concludes with two
main recommendations: avoid research aimed at intentionally creating
conscious AI, and instead focus efforts on reducing our current uncertainty
about both the factual and epistemic questions of AI consciousness.
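
The two-by-two structure of the framework is easy to make concrete. Below is
a minimal Python sketch, not from the paper itself: it crosses the factual
question (is the AI conscious?) with the epistemic question (does society
believe it is?) to name the four scenarios, and attaches the abstract's
qualitative ranking. The severity labels for the two correct-belief scenarios
are placeholders, since the abstract ranks only the two mistaken-belief
scenarios.

    from itertools import product

    # Illustrative sketch of the paper's two-dimensional framework:
    #   factual   -- will future advanced AI systems actually be conscious?
    #   epistemic -- will society broadly believe that they are conscious?
    # Crossing the two binary answers yields the four scenarios.

    def scenario(actually_conscious: bool, believed_conscious: bool) -> str:
        """Name the quadrant of the 2x2 framework for one (fact, belief) pair."""
        if believed_conscious:
            return "true positive" if actually_conscious else "false positive"
        return "false negative" if actually_conscious else "true negative"

    # Qualitative ranking reported in the abstract: false negative is worst,
    # then false positive; the remaining two scenarios are not ranked there,
    # so their labels are placeholders.
    SEVERITY = {
        "false negative": "worst",
        "false positive": "second worst",
        "true positive": "lower risk (unranked)",
        "true negative": "lower risk (unranked)",
    }

    for fact, belief in product([True, False], repeat=2):
        name = scenario(fact, belief)
        print(f"conscious={fact!s:<5} believed={belief!s:<5} "
              f"-> {name:<14} ({SEVERITY[name]})")

A fuller version would score the four identified risks (AI suffering, human
disempowerment, geopolitical instability, and human depravity) against each
scenario to reproduce the paper's per-scenario qualitative assessment.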