A Genealogical Approach to Algorithmic Bias

IF 4.2 · CAS Zone 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Marta Ziosi, David Watson, Luciano Floridi
{"title":"算法偏差的家谱学方法","authors":"Marta Ziosi, David Watson, Luciano Floridi","doi":"10.1007/s11023-024-09672-2","DOIUrl":null,"url":null,"abstract":"<p>The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires <i>ex post</i> solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, 2000, Causality, 2nd edn. Cambridge University Press, Cambridge, 2009. https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches concerning their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann’s (Northwest Univ Law Rev 113(5):1163–1227, 2019) constructivist theory of discrimination. We derive three recommendations for XAI practitioners to develop and AI policymakers to regulate tools that address algorithmic bias in its conditions and hence mitigate its future occurrence.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"16 1","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Genealogical Approach to Algorithmic Bias\",\"authors\":\"Marta Ziosi, David Watson, Luciano Floridi\",\"doi\":\"10.1007/s11023-024-09672-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires <i>ex post</i> solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, 2000, Causality, 2nd edn. Cambridge University Press, Cambridge, 2009. https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches concerning their ability to answer fairness-relevant questions and identify fairness-relevant solutions. 
The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann’s (Northwest Univ Law Rev 113(5):1163–1227, 2019) constructivist theory of discrimination. We derive three recommendations for XAI practitioners to develop and AI policymakers to regulate tools that address algorithmic bias in its conditions and hence mitigate its future occurrence.</p>\",\"PeriodicalId\":51133,\"journal\":{\"name\":\"Minds and Machines\",\"volume\":\"16 1\",\"pages\":\"\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Minds and Machines\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11023-024-09672-2\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Minds and Machines","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11023-024-09672-2","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl's ladder of causation (Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000; Causality, 2nd edn, Cambridge University Press, 2009, https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches according to their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann's constructivist theory of discrimination (Northwest Univ Law Rev 113(5):1163–1227, 2019). We derive three recommendations for XAI practitioners to develop, and AI policymakers to regulate, tools that address algorithmic bias in its underlying conditions and hence mitigate its future occurrence.
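The abstract points to Shapley-value feature attributions and counterfactual explanations as XAI tools for gauging the conditions that enable bias. As a minimal sketch (not from the paper), the snippet below uses the open-source shap library to compute Shapley attributions over synthetic tabular data and reports how strongly a hypothetical protected characteristic drives a model's predictions; the dataset, the model choice, and all column names are illustrative assumptions.

```python
# A minimal sketch (not from the paper): Shapley-value feature attributions
# via the open-source `shap` library. The dataset, model, and column names
# ("protected", "income", "zip_risk") are hypothetical illustrations.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic tabular data: a binary protected attribute plus two correlates.
X = pd.DataFrame({
    "protected": rng.integers(0, 2, n).astype(float),
    "income": rng.normal(50.0, 10.0, n),
    "zip_risk": rng.normal(0.5, 0.1, n),
})
# A synthetic score that partly depends on the protected attribute.
y = 0.01 * X["income"] + 0.3 * X["protected"] + rng.normal(0.0, 0.05, n)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Shapley values: additive per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute attribution per feature: one crude gauge of how much
# the model's outputs lean on the protected characteristic.
for name, score in zip(X.columns, np.abs(attributions).mean(axis=0)):
    print(f"{name}: {score:.3f}")
```

Counterfactual approaches, by contrast, ask what a model would have predicted had a given feature taken a different value, the kind of question that Pearl's ladder of causation places above mere association.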

Source journal: Minds and Machines
Category: Engineering & Technology – Computer Science: Artificial Intelligence
CiteScore: 12.60
Self-citation rate: 2.70%
Articles per year: 30
Review time: >12 weeks
About the journal: Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science. Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal presents special editions dedicated to specific topics, invites critical responses to previously published works, and features review essays addressing current problem scenarios. By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.