Requirements for trustworthy AI-enabled automated decision-making in the public sector: A systematic review

Impact Factor 12.9 · Zone 1 (Management) · Q1 (Business)
Olusegun Agbabiaka , Adegboyega Ojo , Niall Connolly
Technological Forecasting and Social Change, Vol. 215, Article 124076. Published 2025-03-05. DOI: 10.1016/j.techfore.2025.124076
Citations: 0

Abstract

With AI adoption for decision-making in the public sector projected to rise with profound socio-ethical impacts, the need to ensure its trustworthy use continues to attract research attention. We analyze the existing body of evidence and establish trustworthiness requirements for AI-enabled automated decision-making (ADM) in the public sector, identifying eighteen aggregate facets. We link these facets to dimensions of trust in automation and institution-based trust to develop a theory-oriented research framework. We further map them to the OECD AI system lifecycle, creating a practice-focused framework. Our study has theoretical, practical and policy implications. First, we extend the theory on technological trust. We also contribute to trustworthy AI literature, shedding light on relatively well-known requirements like accountability and transparency and revealing novel ones like context sensitivity, feedback and policy learning. Second, we provide a roadmap for public managers and developers to improve ADM governance practices along the AI lifecycle. Third, we offer policymakers a basis for evaluating possible gaps in current AI policies. Overall, our findings present opportunities for further research and offer some guidance on how to navigate the multi-dimensional challenges of designing, developing and implementing ADM for improved trustworthiness and greater public trust.

Source journal metrics: CiteScore 21.30 · Self-citation rate 10.80% · Articles published 813
Journal overview: Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors. In addition to serving as a key forum for these discussions, the journal offers numerous benefits for authors, including complimentary PDFs, a generous copyright policy, exclusive discounts on Elsevier publications, and more.