ICT-facilitated Health Interventions for Indigenous Communities: A Critical Literature Review
Morgan Vigil-Hayes, Lakshmi Panguluri, Harry DeCecco, Md Nazmul Hossain, Ann D. Collier, Darold Joseph, Ashish Amresh
ACM Journal on Responsible Computing. Published 2024-08-10. DOI: https://doi.org/10.1145/3687133

Abstract: Despite significant cultural strengths and knowledge, Indigenous people around the world experience substantial health inequities due to the historic and ongoing impacts of settler colonialism. As information and communication technologies (ICTs) are increasingly used as part of health interventions to help bridge equity gaps, it is important to characterize and critically evaluate how ICT-facilitated health interventions are designed for and used by Indigenous people. This critical literature review queried articles from three archives focused on health and technology with the goal of identifying cross-cutting challenges and opportunities for ICT-facilitated health interventions in Indigenous communities. Importantly, we use the lens of decolonization to understand issues that impact Indigenous sovereignty, including the incorporation of Indigenous Knowledge and engagement with data sovereignty.

Lay User Involvement in Developing Human-Centric Responsible AI Systems: When and How?
Beatrice Vincenzi, Simone Stumpf, Alex S. Taylor, Yuri Nakao
ACM Journal on Responsible Computing. Published 2024-03-15. DOI: https://doi.org/10.1145/3652592

Abstract: Artificial Intelligence (AI) is increasingly used in mainstream applications to make decisions that affect a large number of people. While research has focused on involving machine learning and domain experts during the development of responsible AI systems, the input of lay users has too often been ignored. By exploring the involvement of lay users, our work seeks to advance human-centric responsible AI development processes. To reflect on lay users' views, we conducted an online survey of 1,121 people in the United Kingdom. We found that respondents had concerns about the fairness and transparency of AI systems, and that broader education around AI is needed to underpin lay user involvement. They saw a need for having their views reflected at all stages of the AI development lifecycle. Lay users mainly charged internal stakeholders with overseeing the development process, supported by an ethics committee and input from an external regulatory body. We also probed possible techniques for involving lay users more directly. Our work has implications for creating processes that ensure the development of responsible AI systems that take lay user perspectives into account.

ICT Under Constraint: Exposing Tensions in Collaboratively Prioritising ICT Innovation for Climate Targets
Kelly Widdicks, Bran Knowles, A. Friday, Gordon S. Blair
ACM Journal on Responsible Computing. Published 2024-03-11. DOI: https://doi.org/10.1145/3648234

Abstract: The international treaty known as the Paris Agreement requires global greenhouse gas emissions to decrease at a pace that will limit global warming to 1.5 degrees Celsius. Given the pressure on all sectors to reduce their emissions to meet this target, the ICT sector must begin to explore, for the first time, how to innovate under constraint. This could mean facing the unprecedented dilemma of having to choose between innovations, in which case the community will need to develop processes for making collective decisions regarding which innovations are most deserving of their carbon costs. In this paper, we expose tensions in collaboratively prioritising ICT innovation under constraints, and discuss the considerations and approaches the ICT sector may require to make such decisions effectively across the sector. This opens up a new area of research where we envision HCI expertise can inform and resolve such tensions for values-based and target-led ICT innovation towards a sustainable future.

Which Skin Tone Measures are the Most Inclusive? An Investigation of Skin Tone Measures for Artificial Intelligence
Courtney M. Heldreth, Ellis P. Monk, Alan T. Clark, Susanna Ricco, Candice Schumann, Xango Eyee
ACM Journal on Responsible Computing. Published 2023-11-09. DOI: https://doi.org/10.1145/3632120

Abstract: Skin tone plays a critical role in artificial intelligence (AI). However, many algorithms have exhibited unfair bias against people with darker skin tones. One reason this occurs is a poor understanding of how well the scales we use to measure and account for skin tone in AI actually represent the variation of skin tones in people affected by these systems. To address this, we conducted a survey with 2,214 people in the United States to compare three skin tone scales: the Fitzpatrick 6-point scale, Rihanna's Fenty™ Beauty 40-point skin tone palette, and a newly developed Monk 10-point scale from the social sciences. We find that the Fitzpatrick scale is perceived to be less inclusive than the Fenty and Monk skin tone scales, and this was especially true for people from historically marginalized communities (i.e., people with darker skin tones, BIPOCs, and women). We also find no statistically meaningful differences in perceived representation across the Monk skin tone scale and the Fenty Beauty palette. We discuss the ways in which our findings can advance the understanding of skin tone in both the social science and machine learning communities.
{"title":"Measuring and Mitigating Group Inequalities In Resource Allocation","authors":"Arya Farahi, Angela Ting, Yingchen Ma","doi":"10.1145/3632122","DOIUrl":"https://doi.org/10.1145/3632122","url":null,"abstract":"Resource allocation, an integral part of socio-economic governance, profoundly influences individual prosperity and has the potential to mitigate or exacerbate socioeconomic disparities. This paper addresses the challenge of equitably allocating finite resources among individuals by answering two fundamental questions: (1) how to accurately measure and test group disparities and (2) how to optimally distribute resources while ensuring group fairness. We propose the Group Beneficiary Disparity (GBD) metric – an evaluation tool engineered to systematically gauge inequalities in a binary beneficiary/non-beneficiary context. The GBD provides decision-makers and planners with a powerful tool to audit social programs and optimize policies from a lens of group equality. We argue that utilitarian decision-makers cannot fully eliminate group disparities even when operating under social welfare constraints. To address this issue, we propose a new resource allocation optimization model, called A-FARM (Asymptotically Fair Allocation of Resources Model), with asymptotic group fairness guarantees. A-FARM partitions individuals into distinct, non-overlapping units and distributes resources among these units based on a utility-based allocation mechanism. Finally, we evaluate the performance of our proposed algorithm using both simulated and real-world data. Our results demonstrate that, A-FARM enables decision-makers to (1) achieve maximume efficiency under group fairness constrain and (2) perform a fairness-efficiency trade-off.","PeriodicalId":329595,"journal":{"name":"ACM Journal on Responsible Computing","volume":"30 43","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135390446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

“It's Everybody's Role to Speak Up... But Not Everyone Will”: Understanding AI Professionals' Perceptions of Accountability for AI Bias Mitigation
Caitlin M. Lancaster, Kelsea Schulenberg, Christopher Flathmann, Nathan J. McNeese, Guo Freeman
ACM Journal on Responsible Computing. Published 2023-11-07. DOI: https://doi.org/10.1145/3632121

Abstract: In this paper, we investigate AI professionals' perceptions of their accountability for mitigating AI bias. Our work is motivated by calls for socially responsible AI development and governance in the face of societal harm and a lack of accountability across the entire socio-technical system. In particular, we explore a gap in the field stemming from the lack of empirical data needed to establish how real AI professionals view bias mitigation and why individual AI professionals may be prevented from taking accountability even if they have the technical ability to do so. This gap is concerning as larger responsible AI efforts inherently rely on individuals who contribute to designing, developing, and deploying AI technologies and mitigation solutions. Through semi-structured interviews with AI professionals from diverse roles, organizations, and industries working on development projects, we identify that AI professionals are hindered from mitigating AI bias by challenges that arise from two key areas: (1) their own technical and connotative understanding of AI bias and (2) internal and external organizational factors that inhibit these individuals. In exploring these factors, we reject previous claims that technical aptitude alone prevents accountability for AI bias. Instead, we point to interpersonal and intra-organizational issues that limit agency, empowerment, and overall participation in responsible computing efforts. Furthermore, to support practical approaches to responsible AI, we propose several high-level principled guidelines, grounded in socio-technical systems theory and moral disengagement theory, that will support the understanding, culpability, and mitigation of AI bias and its harm.

Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey
Max Hort, Zhenpeng Chen, Jie M. Zhang, Mark Harman, Federica Sarro
ACM Journal on Responsible Computing. Published 2023-11-01. DOI: https://doi.org/10.1145/3631326

Abstract: This paper provides a comprehensive survey of bias mitigation methods for achieving fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning bias mitigation for ML classifiers. These methods can be distinguished based on their intervention procedure (i.e., pre-processing, in-processing, post-processing) and the technique they apply. We investigate how existing bias mitigation methods are evaluated in the literature. In particular, we consider datasets, metrics, and benchmarking. Based on the gathered insights (e.g., What is the most popular fairness metric? How many datasets are used for evaluating bias mitigation methods?), we hope to support practitioners in making informed choices when developing and evaluating new bias mitigation methods.
{"title":"Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design","authors":"Jonathan Zong, J. Nathan Matias","doi":"10.1145/3630107","DOIUrl":"https://doi.org/10.1145/3630107","url":null,"abstract":"Amidst calls for public accountability over large data-driven systems, feminist and indigenous scholars have developed refusal as a practice that challenges the authority of data collectors. However, because data affects so many aspects of daily life, it can be hard to see seemingly different refusal strategies as part of the same repertoire. Furthermore, conversations about refusal often happen from the standpoint of designers and policymakers rather than the people and communities most affected by data collection. In this paper, we introduce a framework for data refusal from below —writing from the standpoint of people who refuse, rather than the institutions that seek their compliance. Because refusers work to reshape socio-technical systems, we argue that refusal is an act of design, and that design-based frameworks and methods can contribute to refusal. We characterize refusal strategies across four constituent facets common to all refusal, whatever strategies are used: autonomy , or how refusal accounts for individual and collective interests; time , or whether refusal reacts to past harm or proactively prevents future harm; power , or the extent to which refusal makes change possible; and cost , or whether or not refusal can reduce or redistribute penalties experienced by refusers. We illustrate each facet by drawing on cases of people and collectives that have refused data systems. Together, the four facets of our framework are designed to help scholars and activists describe, evaluate, and imagine new forms of refusal.","PeriodicalId":329595,"journal":{"name":"ACM Journal on Responsible Computing","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134973215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tragedy of the Commons in Crowd Work-Based Research","authors":"Huichuan Xia","doi":"10.1145/3626493","DOIUrl":"https://doi.org/10.1145/3626493","url":null,"abstract":"Academic scholars have leveraged crowd work platforms such as MTurk to conduct research and collect data, but the data quality crisis in crowd work has been an alarming phenomenon recently. Though prior studies have discussed data quality and validity issues in crowd work via surveys and experiments, they kind of neglected to explore the scholars’ and particularly the IRB's ethical concerns and the related policies in various ethical guidelines for crowd work-based research in these respects. In this study, we interviewed 17 scholars from six disciplines and 15 IRB directors and analysts in the U.S. and analyzed 28 research guidance documents to fill these gaps. We identified common themes among our interviewees and documents but also discovered distinctive and even opposing views regarding the approval rate, rejection, and internal/external research validity. Based on the findings, we discussed a potential Tragedy of the Commons regarding data quality deterioration and the disciplinary differences regarding validity in crowd work-based research. We further explored the origin of the data quality and validity issues in crowd work-based research. We advocated the IRB's ethical concerns in crowd work-based research be heard and respected further and be reflected in the ethical guidance for crowd work-based research. Finally, we proposed our research implications, limits, and future work.","PeriodicalId":329595,"journal":{"name":"ACM Journal on Responsible Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135347471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Between Privacy and Utility: On Differential Privacy in Theory and Practice","authors":"Jeremy Seeman, Daniel Susser","doi":"10.1145/3626494","DOIUrl":"https://doi.org/10.1145/3626494","url":null,"abstract":"Differential privacy (DP) aims to confer data processing systems with inherent privacy guarantees, offering strong protections for personal data. But DP’s approach to privacy carries with it certain assumptions about how mathematical abstractions will be translated into real-world systems, which—if left unexamined, and unrealized in practice—could function to shield data collectors from liability and criticism, rather than substantively protect data subjects from privacy harms. This paper investigates these assumptions and discusses their implications for using DP to govern data-driven systems. In Parts 1 and 2, we introduce DP as, on one hand, a mathematical framework and, on the other hand, a kind of real-world sociotechnical system, using a hypothetical case study to illustrate how the two can diverge. In Parts 3 and 4, we discuss the way DP frames privacy loss, data processing interventions, and data subject participation, arguing it could exacerbate existing problems in privacy regulation. In part 5, we conclude with a discussion of DP’s potential interactions with the endogeneity of privacy law, and we propose principles for best governing DP systems. In making such assumptions and their consequences explicit, we hope to help DP succeed at realizing its promise for better substantive privacy protections.","PeriodicalId":329595,"journal":{"name":"ACM Journal on Responsible Computing","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135347469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}