Centering disability perspectives in algorithmic fairness, accountability, & transparency

Alexandra Reeve Givens, Meredith Ringel Morris
{"title":"Centering disability perspectives in algorithmic fairness, accountability, & transparency","authors":"Alexandra Reeve Givens, M. Morris","doi":"10.1145/3351095.3375686","DOIUrl":null,"url":null,"abstract":"It is vital to consider the unique risks and impacts of algorithmic decision-making for people with disabilities. The diverse nature of potential disabilities poses unique challenges for approaches to fairness, accountability, and transparency. Many disabled people choose not to disclose their disabilities, making auditing and accountability tools particularly hard to design and operate. Further, the variety inherent in disability poses challenges for collecting representative training data in any quantity sufficient to better train more inclusive and accountable algorithms. This panel highlights areas of concern, present emerging research efforts, and enlist more researchers and advocates to study the potential impacts of algorithmic decision-making on people with disabilities. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity. In the technology space, discussion topics will include methods to assess the fairness of current AI systems, and strategies to develop new systems and bias mitigation approaches that ensure fairness for people with disabilities. For example, how do today's currently-deployed AI systems impact people with disabilities? If developing inclusive datasets is part of the solution, how can researchers ethically gather such data, and what risks might centralizing data about disability pose? What new privacy solutions must developers create to reduce the risk of deductive disclosure of identities of people with disabilities in \"anonymized\" datasets? How can AI models and bias mitigation techniques be developed that handle the unique challenges of disability, i.e., the \"long tail\" and low incidence of many types of disability - for instance, how do we ensure that data about disability are not treated as outliers? What are the pros and cons of developing custom/personalized AI models for people with disabilities versus ensuring that general models are inclusive? In the law and policy space, the framework for people with disabilities requires specific study. For example, the Americans with Disabilities Act (ADA) requires employers to adopt \"reasonable accommodations\" for qualified individuals with a disability. But what is a \"reasonable accommodation\" in the context of machine learning and AI? How will the ADA's unique standards interact with case law and scholarship about algorithmic bias against other protected groups? When the ADA governs what questions employers can ask about a candidate's disability, and HIPAA and the Genetic Information Privacy Act regulate the sharing of health information, how should we think about inferences from data that approximate such questions? Panelists will bring varied perspectives to this conversation, including backgrounds in computer science, disability studies, legal studies, and activism. In addition to their scholarly expertise, several panelists have direct lived experience with disability. 
The session format will consist of brief position statements from each panelist, followed by questions from the moderator, and then open questions from and discussion with the audience.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"224 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3375686","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

It is vital to consider the unique risks and impacts of algorithmic decision-making for people with disabilities. The diverse nature of potential disabilities poses unique challenges for approaches to fairness, accountability, and transparency. Many disabled people choose not to disclose their disabilities, making auditing and accountability tools particularly hard to design and operate. Further, the variety inherent in disability makes it difficult to collect representative training data in quantities sufficient to train more inclusive and accountable algorithms. This panel highlights areas of concern, presents emerging research efforts, and aims to enlist more researchers and advocates to study the potential impacts of algorithmic decision-making on people with disabilities. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity.

In the technology space, discussion topics will include methods to assess the fairness of current AI systems, and strategies to develop new systems and bias mitigation approaches that ensure fairness for people with disabilities. For example, how do currently deployed AI systems impact people with disabilities? If developing inclusive datasets is part of the solution, how can researchers ethically gather such data, and what risks might centralizing data about disability pose? What new privacy solutions must developers create to reduce the risk of deductive disclosure of the identities of people with disabilities in "anonymized" datasets? How can AI models and bias mitigation techniques be developed to handle the unique challenges of disability, such as the "long tail" and low incidence of many types of disability? For instance, how do we ensure that data about disability are not treated as outliers? What are the pros and cons of developing custom or personalized AI models for people with disabilities versus ensuring that general models are inclusive?

In the law and policy space, the framework for people with disabilities requires specific study. For example, the Americans with Disabilities Act (ADA) requires employers to adopt "reasonable accommodations" for qualified individuals with a disability. But what is a "reasonable accommodation" in the context of machine learning and AI? How will the ADA's unique standards interact with case law and scholarship about algorithmic bias against other protected groups? When the ADA governs what questions employers can ask about a candidate's disability, and HIPAA and the Genetic Information Privacy Act regulate the sharing of health information, how should we think about inferences from data that approximate such questions?

Panelists will bring varied perspectives to this conversation, including backgrounds in computer science, disability studies, legal studies, and activism. In addition to their scholarly expertise, several panelists have direct lived experience with disability. The session will consist of brief position statements from each panelist, followed by questions from the moderator, and then open questions from and discussion with the audience.
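To make the abstract's "outlier" concern concrete, here is a minimal Python sketch (an editorial illustration, not drawn from the panel; all column names and thresholds are hypothetical) of how routine rare-category filtering can silently drop low-incidence disability groups from training data, paired with a simple per-group audit:

```python
# Illustrative sketch only, not the panel's method. Column names ("disability_type",
# "hired") and the min_count threshold are hypothetical.
import pandas as pd

def drop_rare_groups(df: pd.DataFrame, col: str, min_count: int = 50) -> pd.DataFrame:
    """Discard rows whose category in `col` appears fewer than min_count times.
    Because many disability types are individually low-incidence (the "long
    tail"), this common preprocessing step can remove those groups entirely,
    effectively treating them as noise or outliers."""
    counts = df[col].value_counts()
    return df[df[col].isin(counts[counts >= min_count].index)]

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Mean positive-outcome rate per group, e.g. the share of candidates a
    hiring model scores as "advance"."""
    return df.groupby(group_col)[outcome_col].mean()
```

Running `selection_rates` before and after `drop_rare_groups` shows whether a low-incidence group was ever represented at all; comparing each group's rate against a reference group's is one simple, disparate-impact-style audit of the kind the abstract asks about.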
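The deductive-disclosure question can likewise be illustrated with a minimal k-anonymity check (again a hypothetical sketch, not the panel's method): a record whose combination of quasi-identifiers is unique in a dataset has k = 1 and can be re-identified by anyone who knows those attributes, and rare assistive-technology or diagnosis values make k = 1 far more likely for people with disabilities.

```python
# Illustrative sketch only. The quasi-identifier columns (zip code, age band,
# assistive technology) and the example rows are hypothetical.
from collections import Counter
from typing import Iterable, Sequence

def k_anonymity(records: Iterable[Sequence], quasi_identifiers: Sequence[int]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.
    k = 1 means at least one record is uniquely identifiable."""
    classes = Counter(tuple(r[i] for i in quasi_identifiers) for r in records)
    return min(classes.values())

rows = [
    ("94110", "30-39", "none"),
    ("94110", "30-39", "none"),
    ("94110", "30-39", "screen reader"),  # unique combination -> re-identifiable
    ("02139", "60-69", "none"),
    ("02139", "60-69", "none"),
]
print(k_anonymity(rows, quasi_identifiers=[0, 1, 2]))  # prints 1
```

Even after names are stripped, the one screen-reader user in a zip code and age band remains unique, which is precisely the risk the abstract raises for "anonymized" disability data.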