Integral system safety for machine learning in the public sector: An empirical account

IF 7.8 | CAS Tier 1 (Management) | Q1 INFORMATION SCIENCE & LIBRARY SCIENCE
J. Delfos (Jeroen), A.M.G. Zuiderwijk (Anneke), S. van Cranenburgh (Sander), C.G. Chorus (Caspar), R.I.J. Dobbe (Roel)
Government Information Quarterly, Volume 41, Issue 3, Article 101963. DOI: 10.1016/j.giq.2024.101963. Published: 2024-08-23.
PDF: https://www.sciencedirect.com/science/article/pii/S0740624X24000558/pdfft?md5=535820313d99de364eb4196e987f032a&pid=1-s2.0-S0740624X24000558-main.pdf
Citations: 0

Abstract

Integral system safety for machine learning in the public sector: An empirical account

This paper introduces systems theory and system safety concepts to ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and their respective institutional context, which impact the ability to control such systems. We use interview data to abductively show what risk factors of such systems are present in public professionals' perceptions and what factors are expected based on systems theory but are missing. Based on the hypothesis that ML systems are best addressed with a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to address ML systems safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support the hypothesis that ML systems can be best regarded through a systems theory lens. Therefore, we conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.

Source journal
Government Information Quarterly
CiteScore: 15.70
Self-citation rate: 16.70%
Articles per year: 106
Journal description: Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.