Algorithmic Discrimination in Public Service Provision: Understanding Citizens’ Attribution of Responsibility for Human vs Algorithmic Discriminatory Outcomes
Saar Alon-Barkat, Madalina Busuioc, Kayla Schwoerer, Kristina S Weißmüller
{"title":"Algorithmic Discrimination in Public Service Provision: Understanding Citizens’ Attribution of Responsibility for Human vs Algorithmic Discriminatory Outcomes","authors":"Saar Alon-Barkat, Madalina Busuioc, Kayla Schwoerer, Kristina S Weißmüller","doi":"10.1093/jopart/muaf024","DOIUrl":null,"url":null,"abstract":"As public bodies increasingly adopt AI technologies in their work, there is simultaneously growing attention to the risk that the reliance on the technology may introduce biases and produce discriminatory administrative outcomes, as demonstrated by multiple real-world cases. Our contribution addresses a core theoretical puzzle: With AI algorithms being increasingly embedded across public services, we lack crucial knowledge about how citizens assign responsibility to public organizations for algorithmic failures and discrimination in public services compared to human discrimination. This speaks to key questions as to whether organizational responsibility attribution mechanisms and public demand for consequences fundamentally change in the context of algorithmic governance. Addressing this gap, we examine whether individual citizens are less likely to attribute responsibility for algorithmic vs human discrimination in public service provision. Building on psychology literature, we further theorize potential mechanisms that underlie these effects and shape citizens’ responses. We investigate these research questions through a pre-registered survey experiment conducted in the Netherlands (N=2,483). Our findings indicate that public organizations are not held to a lower responsibility standard for algorithmic compared to human discrimination. Technological delegation to AI does not allow public bodies to bypass responsibility for discriminatory outcomes. However, we find that citizens assign more responsibility for algorithmic discrimination when the algorithm is developed inhouse rather than externally. This could lead to the emergence of accountability deficits pertaining to technological outsourcing.","PeriodicalId":48366,"journal":{"name":"Journal of Public Administration Research and Theory","volume":"148 1","pages":""},"PeriodicalIF":6.3000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Public Administration Research and Theory","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1093/jopart/muaf024","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"POLITICAL SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
As public bodies increasingly adopt AI technologies in their work, there is simultaneously growing attention to the risk that reliance on the technology may introduce biases and produce discriminatory administrative outcomes, as demonstrated by multiple real-world cases. Our contribution addresses a core theoretical puzzle: with AI algorithms being increasingly embedded across public services, we lack crucial knowledge about how citizens assign responsibility to public organizations for algorithmic failures and discrimination in public services compared to human discrimination. This speaks to key questions as to whether organizational responsibility attribution mechanisms and public demand for consequences fundamentally change in the context of algorithmic governance. Addressing this gap, we examine whether individual citizens are less likely to attribute responsibility for algorithmic versus human discrimination in public service provision. Building on the psychology literature, we further theorize potential mechanisms that underlie these effects and shape citizens' responses. We investigate these research questions through a pre-registered survey experiment conducted in the Netherlands (N = 2,483). Our findings indicate that public organizations are not held to a lower responsibility standard for algorithmic compared to human discrimination. Technological delegation to AI does not allow public bodies to bypass responsibility for discriminatory outcomes. However, we find that citizens assign more responsibility for algorithmic discrimination when the algorithm is developed in-house rather than externally. This could lead to the emergence of accountability deficits pertaining to technological outsourcing.
Journal Description:
The Journal of Public Administration Research and Theory serves as a bridge between public administration or public management scholarship and public policy studies. The Journal aims to provide in-depth analysis of developments in the organizational, administrative, and policy sciences as they apply to government and governance. Each issue brings you critical perspectives and cogent analyses, serving as an outlet for the best theoretical and research work in the field. The Journal of Public Administration Research and Theory is the official journal of the Public Management Research Association.