Studying up: reorienting the study of algorithmic fairness around issues of power

Chelsea Barabas, Colin Doyle, JB Rubinovitz, Karthik Dinakar
DOI: 10.1145/3351095.3372859 (https://doi.org/10.1145/3351095.3372859)
Published in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Publication date: 2020-01-27
Cited by: 70

Abstract

Research within the social sciences and humanities has long characterized the work of data science as a sociotechnical process, comprised of a set of logics and techniques that are inseparable from specific social norms, expectations and contexts of development and use. Yet all too often the assumptions and premises underlying data analysis remain unexamined, even in contemporary debates about the fairness of algorithmic systems. This blindspot exists in part because the methodological toolkit used to evaluate the fairness of algorithmic systems remains limited to a narrow set of computational and legal modes of analysis. In this paper, we expand on Elish and Boyd's [17] call for data scientists to develop more robust frameworks for understanding their work as situated practice by examining a specific methodological debate within the field of anthropology, frequently referred to as the practice of "studying up". We reflect on the contributions that the call to "study up" has made in the field of anthropology before making the case that the field of algorithmic fairness would similarly benefit from a reorientation "upward". A case study from our own work illustrates what it looks like to reorient one's research questions "up" in a high-profile debate regarding the fairness of an algorithmic system - namely, pretrial risk assessment in American criminal law. We discuss the limitations of contemporary fairness discourse with regard to pretrial risk assessment before highlighting the insights gained when we reframe our research questions to focus on those who inhabit positions of power and authority within the U.S. court system. Finally, we reflect on the challenges we have encountered in implementing data science projects that "study up". In the process, we surface new insights and questions about what it means to ethically engage in data science work that directly confronts issues of power and authority.