Diagnosing and Addressing Emergent Harms in the Design Process of Public AI and Algorithmic Systems

Sem Nouws, Íñigo Martinez De Rituerto De Troya, R. Dobbe, M. Janssen
{"title":"Diagnosing and Addressing Emergent Harms in the Design Process of Public AI and Algorithmic Systems","authors":"Sem Nouws, Íñigo Martinez De Rituerto De Troya, R. Dobbe, M. Janssen","doi":"10.1145/3598469.3598557","DOIUrl":null,"url":null,"abstract":"Algorithmic and data-driven systems are increasingly used in the public sector to improve the efficiency of existing services or to provide new services through the newfound capacity to process vast volumes of data. Unfortunately, certain instances also have negative consequences for citizens, in the form of discriminatory outcomes, arbitrary decisions, lack of recourse, and more. These have serious impacts on citizens ranging from material to psychological harms. These harms partly emerge from choices and interactions in the design process. Existing critical and reflective frameworks for technology design do not address several aspects that are important to the design of systems in the public sector, namely protection of citizens in the face of potential algorithmic harms, the design of institutions to ensure system safety, and an understanding of how power relations affect the design, development, and deployment of these systems. The goal of this workshop is to develop these three perspectives and take the next step towards reflective design processes within public organisations. The workshop will be divided into two parts. In the first half we will elaborate the conceptual foundations of these perspectives in a series of short talks. Workshop participants will learn new ways of protecting against algorithmic harms in sociotechnical systems through understanding what institutions can support system safety, and how power relations influence the design process. In the second half, participants will get a chance to apply these lenses by analysing a real world case, and reflect on the challenges in applying conceptual frameworks to practice.","PeriodicalId":401026,"journal":{"name":"Proceedings of the 24th Annual International Conference on Digital Government Research","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 24th Annual International Conference on Digital Government Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3598469.3598557","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Algorithmic and data-driven systems are increasingly used in the public sector to improve the efficiency of existing services or to provide new services through the newfound capacity to process vast volumes of data. Unfortunately, certain instances also have negative consequences for citizens, in the form of discriminatory outcomes, arbitrary decisions, lack of recourse, and more. These have serious impacts on citizens, ranging from material to psychological harms. These harms partly emerge from choices and interactions in the design process. Existing critical and reflective frameworks for technology design do not address several aspects that are important to the design of systems in the public sector, namely the protection of citizens in the face of potential algorithmic harms, the design of institutions to ensure system safety, and an understanding of how power relations affect the design, development, and deployment of these systems. The goal of this workshop is to develop these three perspectives and take the next step towards reflective design processes within public organisations. The workshop will be divided into two parts. In the first half, we will elaborate the conceptual foundations of these perspectives in a series of short talks. Workshop participants will learn new ways of protecting against algorithmic harms in sociotechnical systems by understanding which institutions can support system safety and how power relations influence the design process. In the second half, participants will have the chance to apply these lenses by analysing a real-world case and to reflect on the challenges of applying conceptual frameworks in practice.