Resolving value conflicts in public AI governance: A procedural justice framework
Karl de Fine Licht
Government Information Quarterly, Vol. 42, No. 2, Article 102033 (published 2025-05-13)
DOI: 10.1016/j.giq.2025.102033
URL: https://www.sciencedirect.com/science/article/pii/S0740624X25000279
This paper addresses the challenge of resolving value conflicts in the public governance of artificial intelligence (AI). While existing AI ethics and regulatory frameworks emphasize a range of normative criteria—such as accuracy, transparency, fairness, and accountability—many of these values are in tension and, in some cases, incommensurable. I propose a procedural justice framework that distinguishes between conflicts among derivative trustworthiness criteria and those involving fundamental democratic values. For the former, I apply analytical tools such as the Dominance Principle, Supervaluationism, and Maximality to eliminate clearly inferior alternatives. For the latter, I argue that justifiable decision-making requires procedurally fair deliberation grounded in widely endorsed principles such as publicity, inclusion, relevance, and appeal. I demonstrate the applicability of this framework through an in-depth analysis of an AI-based decision support system used by the Swedish Public Employment Service (PES), showing how institutional decision-makers can navigate complex trade-offs between efficiency, explainability, and legality. The framework provides public institutions with a structured method for addressing normative conflicts in AI implementation, moving beyond technical optimization toward democratically legitimate governance.
Journal introduction:
Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.