Belief updating in AI-risk debates: Exploring the limits of adversarial collaboration
Josh Rosenberg, Ezra Karger, Zach Jacobs, Molly Hickman, Avital Morris, Harrison Durland, Otto Kuusela, Philip E. Tetlock
Risk Analysis, published 2025-04-03. DOI: 10.1111/risa.70023
Citations: 0
Abstract
We organized adversarial collaborations between subject-matter experts and expert forecasters with opposing views on whether recent advances in Artificial Intelligence (AI) pose an existential threat to humanity in the 21st century. Two studies incentivized participants to engage in respectful perspective-taking, to share their strongest arguments, and to propose early-warning indicator questions (cruxes) for the probability of an AI-related catastrophe by 2100. AI experts saw greater threats from AI than did expert forecasters, and neither group changed its long-term risk estimates, but they did preregister cruxes whose resolution by 2030 would sway their views on long-term risk. These persistent differences shrank as questioning moved across centuries, from 2100 to 2500 and beyond, by which time both groups put the risk of extreme negative outcomes from AI at 30%-40%. Future research should address the generalizability of these results beyond our sample to alternative samples of experts, and beyond the topic area of AI to other questions and time frames.
About the journal:
Published on behalf of the Society for Risk Analysis, Risk Analysis is ranked among the top 10 journals in the ISI Journal Citation Reports (social sciences, mathematical methods category) and provides a focal point for new developments in the field of risk analysis. This international peer-reviewed journal is committed to publishing critical empirical research and commentaries dealing with risk issues. The topics covered include:
• Human health and safety risks
• Microbial risks
• Engineering
• Mathematical modeling
• Risk characterization
• Risk communication
• Risk management and decision-making
• Risk perception, acceptability, and ethics
• Laws and regulatory policy
• Ecological risks