Mapping public perception of artificial intelligence: Expectations, risk–benefit tradeoffs, and value as determinants for societal acceptance

Impact Factor: 13.3 · CAS Tier 1 (Management) · JCR Q1 (Business)
Philipp Brauner, Felix Glawe, Gian Luca Liehner, Luisa Vervier, Martina Ziefle
{"title":"Mapping public perception of artificial intelligence: Expectations, risk–benefit tradeoffs, and value as determinants for societal acceptance","authors":"Philipp Brauner,&nbsp;Felix Glawe,&nbsp;Gian Luca Liehner,&nbsp;Luisa Vervier,&nbsp;Martina Ziefle","doi":"10.1016/j.techfore.2025.124304","DOIUrl":null,"url":null,"abstract":"<div><div>Public opinion on artificial intelligence (AI) plays a pivotal role in shaping trust and AI alignment, ethical adoption, and the development equitable policy frameworks. This study investigates expectations, risk–benefit tradeoffs, and value assessments as determinants of societal acceptance of AI. Using a nationally representative sample (N = 1100) from Germany, we examined mental models of AI and potential biases. Participants evaluated 71 AI-related scenarios across domains such as autonomous driving, medical care, art, politics, warfare, and societal divides, assessing their expected likelihood, perceived risks, benefits, and overall value. We present ranked evaluations alongside visual mappings illustrating the risk–benefit tradeoffs. Our findings suggest that while many scenarios were considered likely, they were often associated with high risks, limited benefits, and low overall value. Regression analyses revealed that 96.5% (<span><math><mrow><msup><mrow><mi>r</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>=</mo><mn>0</mn><mo>.</mo><mn>965</mn></mrow></math></span>) of the variance in value judgments was explained by risks (<span><math><mrow><mi>β</mi><mo>=</mo><mo>−</mo><mn>0</mn><mo>.</mo><mn>490</mn></mrow></math></span>) and, more strongly, benefits (<span><math><mrow><mi>β</mi><mo>=</mo><mo>+</mo><mn>0</mn><mo>.</mo><mn>672</mn></mrow></math></span>), with no significant relationship to expected likelihood. Demographics and personality traits, including age, gender, and AI readiness, influenced perceptions, highlighting the need for targeted AI literacy initiatives. These findings offer actionable insights for researchers, developers, and policymakers, highlighting the need to communicate tangible benefits and address public concerns to foster responsible and inclusive AI adoption. Future research should explore cross-cultural differences and longitudinal changes in public perception to inform global AI governance.</div></div>","PeriodicalId":48454,"journal":{"name":"Technological Forecasting and Social Change","volume":"220 ","pages":"Article 124304"},"PeriodicalIF":13.3000,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technological Forecasting and Social Change","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S004016252500335X","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract

Public opinion on artificial intelligence (AI) plays a pivotal role in shaping trust, AI alignment, ethical adoption, and the development of equitable policy frameworks. This study investigates expectations, risk–benefit tradeoffs, and value assessments as determinants of societal acceptance of AI. Using a nationally representative sample (N = 1100) from Germany, we examined mental models of AI and potential biases. Participants evaluated 71 AI-related scenarios across domains such as autonomous driving, medical care, art, politics, warfare, and societal divides, assessing their expected likelihood, perceived risks, benefits, and overall value. We present ranked evaluations alongside visual mappings illustrating the risk–benefit tradeoffs. Our findings suggest that while many scenarios were considered likely, they were often associated with high risks, limited benefits, and low overall value. Regression analyses revealed that 96.5% (r² = 0.965) of the variance in value judgments was explained by risks (β = −0.490) and, more strongly, benefits (β = +0.672), with no significant relationship to expected likelihood. Demographics and personality traits, including age, gender, and AI readiness, influenced perceptions, highlighting the need for targeted AI literacy initiatives. These findings offer actionable insights for researchers, developers, and policymakers, underscoring the need to communicate tangible benefits and address public concerns to foster responsible and inclusive AI adoption. Future research should explore cross-cultural differences and longitudinal changes in public perception to inform global AI governance.
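As a rough illustration of the kind of regression reported above (scenario value judgments predicted from standardized risk, benefit, and likelihood ratings), the sketch below fits an ordinary least squares model on z-standardized variables so the coefficients are comparable to the reported betas. The file name `scenario_ratings.csv` and the column names are assumptions for illustration only, not the study's actual materials or analysis code.

```python
# Minimal sketch of a standardized OLS regression: value ~ risk + benefit + likelihood.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

# Assumed: one row per scenario (71 rows) with mean ratings per dimension.
df = pd.read_csv("scenario_ratings.csv")  # columns: value, risk, benefit, likelihood

# z-standardize all variables so the fitted slopes are standardized betas.
z = (df - df.mean()) / df.std(ddof=0)

X = sm.add_constant(z[["risk", "benefit", "likelihood"]])
model = sm.OLS(z["value"], X).fit()

print(model.summary())                    # standardized coefficients, p-values
print("R^2:", round(model.rsquared, 3))   # proportion of variance explained
```

Under this setup, a pattern like the one reported in the abstract would appear as a negative slope for risk, a larger positive slope for benefit, and a likelihood coefficient that is not statistically significant.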


Source journal: Technological Forecasting and Social Change
CiteScore: 21.30
Self-citation rate: 10.80%
Articles published: 813
Journal description: Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors. In addition to serving as a key forum for these discussions, we offer numerous benefits for authors, including complimentary PDFs, a generous copyright policy, exclusive discounts on Elsevier publications, and more.