{"title":"Artificial intelligence vs. public administrators: Public trust, efficiency, and tolerance for errors","authors":"Haixu Bao , Wenfei Liu , Zheng Dai","doi":"10.1016/j.techfore.2025.124102","DOIUrl":null,"url":null,"abstract":"<div><div>This study develops and empirically examines an integrative systems framework of public trust in artificial intelligence (AI) in the public sector, grounded in Luhmann's theory of systemic trust. Through a methodologically rigorous survey experiment, we manipulated administrator/AI capabilities across computational-audit and conversational-advisory scenarios to investigate context-dependent trust dynamics. Findings reveal significant variation in public trust across usage contexts, with respondents demonstrating higher trust in AI for computational tasks, while preferring human administrators for conversational settings. Notably, our results challenge conventional assumptions about AI trust fragility, as evidence of AI mistakes did not engender disproportionate distrust relative to naturally imperfect humans. The study further demonstrates that improved efficiency can mitigate context-specific distrust stemming from AI errors in computational scenarios, though this effect was not observed in conversational contexts. By elucidating these nuanced, context-dependent dynamics of public trust towards algorithmic governance, this research contributes to both theoretical understanding and practical implementation. It provides policymakers with targeted, evidence-based guidance for cultivating appropriate trust when embedding AI technologies in the public sector through context-sensitive design approaches and governance practices. Future research should explore contingent formulations of public trust and its evolution across AI system lifecycles within diverse cultural and institutional environments.</div></div>","PeriodicalId":48454,"journal":{"name":"Technological Forecasting and Social Change","volume":"215 ","pages":"Article 124102"},"PeriodicalIF":12.9000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technological Forecasting and Social Change","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0040162525001337","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0
Abstract
This study develops and empirically examines an integrative systems framework of public trust in artificial intelligence (AI) in the public sector, grounded in Luhmann's theory of systemic trust. Through a methodologically rigorous survey experiment, we manipulated administrator/AI capabilities across computational-audit and conversational-advisory scenarios to investigate context-dependent trust dynamics. Findings reveal significant variation in public trust across usage contexts: respondents trusted AI more for computational tasks but preferred human administrators in conversational settings. Notably, our results challenge conventional assumptions about the fragility of trust in AI, as evidence of AI mistakes did not engender disproportionate distrust relative to naturally imperfect humans. The study further demonstrates that improved efficiency can mitigate context-specific distrust stemming from AI errors in computational scenarios, though this effect was not observed in conversational contexts. By elucidating these nuanced, context-dependent dynamics of public trust in algorithmic governance, this research contributes to both theoretical understanding and practical implementation. It provides policymakers with targeted, evidence-based guidance for cultivating appropriate trust through context-sensitive design and governance practices when embedding AI technologies in the public sector. Future research should explore contingent formulations of public trust and its evolution across AI system lifecycles within diverse cultural and institutional environments.
Journal Introduction
Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors.