ACM Journal on Responsible Computing: Latest Publications

Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making
ACM Journal on Responsible Computing, Pub Date: 2023-08-09, DOI: 10.1145/3616473
Devansh Saxena, Shion Guha
Abstract: Algorithms in public services such as child welfare, criminal justice, and education are increasingly used to make high-stakes decisions about human lives. Drawing on findings from a two-year ethnography conducted at a child welfare agency, we highlight how algorithmic systems are embedded within a complex decision-making ecosystem at critical points of the child welfare process. Caseworkers interact with algorithms in their daily work, collecting information about families and feeding it to systems that inform critical decisions. We show how the interplay between systemic mechanics and algorithmic decision-making can adversely impact the fairness of the decision-making process itself, and how functionality issues in algorithmic systems can produce process-oriented harms: they degrade professional practice and administration at the agency and yield inconsistent and unreliable decisions at the street level. In addition, caseworkers are compelled to undertake additional labor in the form of repair work to restore disrupted administrative processes and decision-making, all while facing organizational pressures and time and resource constraints. Finally, we share a case study of a simple algorithmic tool that centers caseworkers' decision-making within a trauma-informed framework and leads to better outcomes, but that required significant investment by the agency to create the ecosystem for its proper use.
Citations: 0
Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
ACM Journal on Responsible Computing, Pub Date: 2023-06-12, DOI: 10.1145/3636509
Angelina Wang, Sayash Kapoor, Solon Barocas, Arvind Narayanan
Abstract: We formalize predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. For example, pre-trial risk prediction algorithms such as COMPAS use ML to predict whether an individual will re-offend in the future. Our thesis is that predictive optimization raises a distinctive and serious set of normative concerns that cause it to fail on its own terms. To test this, we review 387 reports, articles, and web pages from academia, industry, non-profits, governments, and data science contests, and find many real-world examples of predictive optimization. We select eight particularly consequential examples as case studies. Simultaneously, we develop a set of normative and technical critiques that challenge the claims made by the developers of these applications, in particular claims of increased accuracy, efficiency, and fairness. Our key finding is that these critiques apply to each of the applications, are not easily evaded by redesigning the systems, and thus challenge whether these applications should be deployed. We argue that the burden of evidence for justifying why the deployment of predictive optimization is not harmful should rest with the developers of the tools. Based on our analysis, we provide a rubric of critical questions that can be used to deliberate or contest specific predictive optimization applications.
Citations: 0
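To make the pattern the paper critiques concrete, here is a minimal sketch of predictive optimization: fit a classifier on historical outcomes, then threshold its predicted risk to drive a decision about an individual. The features, data, and 0.5 threshold are hypothetical illustrations, not drawn from the paper or from COMPAS.

```python
# Minimal sketch of "predictive optimization": train an ML model on
# historical outcomes, then convert its risk score into a decision.
# All data, feature meanings, and the threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: two numeric features per person
# and whether an adverse outcome occurred afterward.
X_hist = rng.normal(size=(500, 2))
y_hist = (X_hist[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Step 1: optimize predictive accuracy on past outcomes.
model = LogisticRegression().fit(X_hist, y_hist)

# Step 2: score a new individual and make a high-stakes decision
# by thresholding the predicted probability.
x_new = np.array([[0.8, -0.3]])
risk = model.predict_proba(x_new)[0, 1]
decision = "detain" if risk > 0.5 else "release"
print(f"predicted risk = {risk:.2f} -> decision: {decision}")
```

The paper's critiques target exactly this structure: the model is evaluated only on predictive accuracy over past data, while the decision it drives carries normative weight that the accuracy metric never captures.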
Data Statements: From Technical Concept to Community Practice
ACM Journal on Responsible Computing, Pub Date: 2023-05-08, DOI: 10.1145/3594737
Angelina McMillan-Major, Emily M. Bender, Batya Friedman
Abstract: Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing (NLP), we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with NLP practitioners in order to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities.
Citations: 3
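As a rough illustration of what such a documentation schema looks like in practice, here is a hypothetical skeleton of a data statement, modeled loosely on the fields of the original data statements proposal (curation rationale, language varieties, speaker and annotator demographics, speech situation, text characteristics). The revised Version 2 schema is specified in the paper itself, and the dataset described below is invented.

```python
# Hypothetical skeleton of a data statement for an NLP dataset.
# Field names follow the original data statements proposal
# (Bender & Friedman, 2018); the revised Version 2 schema is
# defined in the paper above. All dataset details are invented.
from dataclasses import dataclass

@dataclass
class DataStatement:
    curation_rationale: str
    language_varieties: list[str]
    speaker_demographics: str
    annotator_demographics: str
    speech_situation: str
    text_characteristics: str
    provenance: str = "original data, no preexisting sources"

example = DataStatement(
    curation_rationale="Forum posts sampled to study politeness markers.",
    language_varieties=["en-US (informal, web)"],
    speaker_demographics="Self-reported adults; ages and genders unknown.",
    annotator_demographics="Three graduate students, L1 English speakers.",
    speech_situation="Asynchronous, public, written peer discussion.",
    text_characteristics="Short posts, 10-200 tokens, informal register.",
)
print(example.curation_rationale)
```

Structuring the documentation as a typed record like this makes missing fields explicit at creation time, which is one way a schema can nudge dataset creators toward completeness.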