Artificial intelligence vs. public administrators: Public trust, efficiency, and tolerance for errors

IF 12.9 | CAS Tier 1 (Management Science) | JCR Q1 (Business)
Haixu Bao, Wenfei Liu, Zheng Dai
{"title":"Artificial intelligence vs. public administrators: Public trust, efficiency, and tolerance for errors","authors":"Haixu Bao ,&nbsp;Wenfei Liu ,&nbsp;Zheng Dai","doi":"10.1016/j.techfore.2025.124102","DOIUrl":null,"url":null,"abstract":"<div><div>This study develops and empirically examines an integrative systems framework of public trust in artificial intelligence (AI) in the public sector, grounded in Luhmann's theory of systemic trust. Through a methodologically rigorous survey experiment, we manipulated administrator/AI capabilities across computational-audit and conversational-advisory scenarios to investigate context-dependent trust dynamics. Findings reveal significant variation in public trust across usage contexts, with respondents demonstrating higher trust in AI for computational tasks, while preferring human administrators for conversational settings. Notably, our results challenge conventional assumptions about AI trust fragility, as evidence of AI mistakes did not engender disproportionate distrust relative to naturally imperfect humans. The study further demonstrates that improved efficiency can mitigate context-specific distrust stemming from AI errors in computational scenarios, though this effect was not observed in conversational contexts. By elucidating these nuanced, context-dependent dynamics of public trust towards algorithmic governance, this research contributes to both theoretical understanding and practical implementation. It provides policymakers with targeted, evidence-based guidance for cultivating appropriate trust when embedding AI technologies in the public sector through context-sensitive design approaches and governance practices. Future research should explore contingent formulations of public trust and its evolution across AI system lifecycles within diverse cultural and institutional environments.</div></div>","PeriodicalId":48454,"journal":{"name":"Technological Forecasting and Social Change","volume":"215 ","pages":"Article 124102"},"PeriodicalIF":12.9000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technological Forecasting and Social Change","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0040162525001337","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract

This study develops and empirically examines an integrative systems framework of public trust in artificial intelligence (AI) in the public sector, grounded in Luhmann's theory of systemic trust. Through a methodologically rigorous survey experiment, we manipulated administrator/AI capabilities across computational-audit and conversational-advisory scenarios to investigate context-dependent trust dynamics. Findings reveal significant variation in public trust across usage contexts, with respondents demonstrating higher trust in AI for computational tasks, while preferring human administrators for conversational settings. Notably, our results challenge conventional assumptions about AI trust fragility, as evidence of AI mistakes did not engender disproportionate distrust relative to naturally imperfect humans. The study further demonstrates that improved efficiency can mitigate context-specific distrust stemming from AI errors in computational scenarios, though this effect was not observed in conversational contexts. By elucidating these nuanced, context-dependent dynamics of public trust towards algorithmic governance, this research contributes to both theoretical understanding and practical implementation. It provides policymakers with targeted, evidence-based guidance for cultivating appropriate trust when embedding AI technologies in the public sector through context-sensitive design approaches and governance practices. Future research should explore contingent formulations of public trust and its evolution across AI system lifecycles within diverse cultural and institutional environments.
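As context for the method described above, the sketch below shows one way a 2 (agent: human administrator vs. AI) × 2 (context: computational-audit vs. conversational-advisory) between-subjects survey experiment could be analyzed. It is illustrative only: the cell sizes, effect sizes, and simulated trust ratings are assumptions, not the paper's data, and the agent × context interaction term is simply what would capture the context-dependent trust pattern the abstract reports.

# A minimal, hypothetical sketch of analyzing a 2x2 between-subjects
# design: agent (human vs. AI) x context (computational vs. conversational),
# with self-reported trust as the outcome. All numbers are simulated for
# illustration and are NOT taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 200  # assumed cell size, not from the paper

rows = []
for agent in ("human", "ai"):
    for context in ("computational", "conversational"):
        # Illustrative means mirroring the abstract's pattern:
        # AI trusted more for computational tasks, humans preferred
        # in conversational settings.
        base = 4.0
        if agent == "ai" and context == "computational":
            base += 0.5
        if agent == "human" and context == "conversational":
            base += 0.5
        trust = rng.normal(base, 1.0, n_per_cell)  # 7-point-scale-like ratings
        rows.append(pd.DataFrame(
            {"agent": agent, "context": context, "trust": trust}))

df = pd.concat(rows, ignore_index=True)

# Two-way ANOVA: the C(agent):C(context) interaction is the term that
# would reflect context-dependent trust dynamics.
model = smf.ols("trust ~ C(agent) * C(context)", data=df).fit()
print(anova_lm(model, typ=2))

Under these assumed effects, the ANOVA table shows a significant agent × context interaction rather than a main effect of agent alone, which is the statistical signature of the abstract's central claim that trust in AI versus human administrators depends on the usage context.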
Source journal
CiteScore: 21.30
Self-citation rate: 10.80%
Articles published: 813
Journal overview: Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors. The journal also offers benefits for authors, including complimentary PDFs, a generous copyright policy, and discounts on Elsevier publications.