“It’s Everybody’s Role to Speak Up... But Not Everyone Will”: Understanding AI Professionals’ Perceptions of Accountability for AI Bias Mitigation

Caitlin M. Lancaster, Kelsea Schulenberg, Christopher Flathmann, Nathan J. McNeese, Guo Freeman
{"title":"”It’s Everybody’s Role to Speak Up... But Not Everyone Will”: Understanding AI Professionals’ Perceptions of Accountability for AI Bias Mitigation","authors":"Caitlin M. Lancaster, Kelsea Schulenberg, Christopher Flathmann, Nathan J. McNeese, Guo Freeman","doi":"10.1145/3632121","DOIUrl":null,"url":null,"abstract":"In this paper, we investigate the perceptions of AI professionals for their accountability for mitigating AI bias. Our work is motivated by calls for socially responsible AI development and governance in the face of societal harm but a lack of accountability across the entire socio-technical system. In particular, we explore a gap in the field stemming from the lack of empirical data needed to conclude how real AI professionals view bias mitigation and why individual AI professionals may be prevented from taking accountability even if they have the technical ability to do so. This gap is concerning as larger responsible AI efforts inherently rely on individuals who contribute to designing, developing, and deploying AI technologies and mitigation solutions. Through semi-structured interviews with AI professionals from diverse roles, organizations, and industries working on development projects, we identify that AI professionals are hindered from mitigating AI bias due to challenges that arise from two key areas: (1) their own technical and connotative understanding of AI bias and (2) internal and external organizational factors that inhibit these individuals. In exploring these factors, we reject previous claims that technical aptitude alone prevents accountability for AI bias. Instead, we point to interpersonal and intra-organizational issues that limit agency, empowerment, and overall participation in responsible computing efforts. Furthermore, to support practical approaches to responsible AI, we propose several high-level principled guidelines that will support the understanding, culpability, and mitigation of AI bias and its harm guided by both socio-technical systems and moral disengagement theories.","PeriodicalId":329595,"journal":{"name":"ACM Journal on Responsible Computing","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal on Responsible Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3632121","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this paper, we investigate AI professionals’ perceptions of their accountability for mitigating AI bias. Our work is motivated by calls for socially responsible AI development and governance in the face of societal harm, coupled with a lack of accountability across the entire socio-technical system. In particular, we explore a gap in the field stemming from the lack of empirical data needed to establish how real AI professionals view bias mitigation and why individual AI professionals may be prevented from taking accountability even when they have the technical ability to do so. This gap is concerning because larger responsible AI efforts inherently rely on the individuals who design, develop, and deploy AI technologies and mitigation solutions. Through semi-structured interviews with AI professionals working on development projects across diverse roles, organizations, and industries, we identify that AI professionals are hindered from mitigating AI bias by challenges arising in two key areas: (1) their own technical and connotative understanding of AI bias and (2) internal and external organizational factors that inhibit these individuals. In exploring these factors, we reject previous claims that a lack of technical aptitude alone prevents accountability for AI bias. Instead, we point to interpersonal and intra-organizational issues that limit agency, empowerment, and overall participation in responsible computing efforts. Furthermore, to support practical approaches to responsible AI, we propose several high-level principled guidelines, grounded in both socio-technical systems and moral disengagement theories, that support the understanding of AI bias, the attribution of culpability for it, and the mitigation of its harms.