Considerations for AI fairness for people with disabilities

AI matters. Published: 2019-12-06. DOI: 10.1145/3362077.3362086
Shari Trewin, Sara H. Basson, Michael J. Muller, Stacy M. Branham, J. Treviranus, D. Gruen, Daniell Hebert, Natalia Lyckowski, Erich Manser
AI matters, vol. 5, no. 1, pp. 40-63. Citations: 55. Full text: https://doi.org/10.1145/3362077.3362086

Abstract

In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to impact fair treatment of people with disabilities in society both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas: employment, education, public safety, and healthcare, identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We next discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.
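The abstract's recommendation to include people with disabilities in testing implies, at minimum, evaluating models per subgroup rather than only in aggregate, so that an acceptable overall score cannot hide poor performance for a smaller group. A minimal sketch of such a disaggregated evaluation (illustrative only, not from the paper; the function names and toy data below are invented):

```python
# Illustrative sketch, not the paper's method: disaggregating a model metric
# by subgroup so that aggregate performance cannot mask subgroup failures.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def disaggregated_accuracy(preds, labels, groups):
    """Accuracy computed separately for each subgroup.

    groups[i] identifies the subgroup of example i (e.g. a self-reported
    disability category collected, with consent, in an inclusive test set).
    """
    by_group = {}
    for p, y, g in zip(preds, labels, groups):
        by_group.setdefault(g, []).append((p, y))
    return {g: accuracy([p for p, _ in pairs], [y for _, y in pairs])
            for g, pairs in by_group.items()}

# Toy data: the model looks fine in aggregate (75% accuracy) but gets only
# one of three examples right for the smaller subgroup "B".
preds  = [1, 1, 0, 0, 1, 0, 1, 1]
labels = [1, 1, 0, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]

print(accuracy(preds, labels))                        # 0.75 in aggregate
print(disaggregated_accuracy(preds, labels, groups))  # "B" scores ~0.33
```

Reporting the per-group breakdown alongside the aggregate number is one concrete way to operationalize the abstract's call to review systems for their impact on users in context.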