Algorithmic Disability Discrimination

Mason Marks
DOI: 10.2139/ssrn.3338209
Published in: Disability, Health, Law, and Bioethics
Publication date: 2019-02-19
Citations: 4

Abstract

Prior to the Digital Age, disability-related information flowed between people with disabilities and their doctors, family members, and friends. However, in the 21st century, artificial intelligence tools allow corporations that collect and analyze consumer data to bypass privacy and antidiscrimination laws, such as HIPAA and the ADA, and infer consumers’ disabilities without their knowledge or consent. When people make purchases, browse the Internet, or post on social media, they leave behind trails of digital traces that reflect where they have been and what they have done. Companies aggregate and analyze those traces using AI to reveal details about people’s physical and mental health. I describe this process as mining for “emergent medical data” (EMD) because digital traces have emergent properties; when analyzed by machine learning, they reveal information that is greater than the sum of their parts. EMD collected from disabled people can serve as a means of sorting them into categories that are assigned positive or negative weights before being used in automated decision making. By negatively weighting the categories into which disabled people are sorted, algorithms can stigmatize them, reinforce the narrative that disabilities are bad, and screen them out of life opportunities without considering their desires or qualifications. This chapter explains how AI disrupts the traditional flow of disability-related data to promote algorithmic disability discrimination.
It presents and analyzes four legislative solutions to the problem: amend Title III of the ADA to include internet businesses within the law’s definition of places of public accommodation, expand the scope of HIPAA’s covered entities to include companies that mine for EMD, impose fiduciary duties on internet platforms and other businesses that infer health data, and establish general data protection regulations in the US inspired by the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 (CCPA). Regardless of the regulatory path chosen, we must evolve our understanding of health information and disability-related data. Whether it is exchanged between patients and doctors or pieced together by AI from the digital traces scattered throughout the Internet, the data of people with disabilities deserves protection. Health data has the potential to harm people if used to exploit rather than to heal, and companies can increasingly mine EMD and use it to reduce the autonomy of people with disabilities. Members of this group should be able to control when and how their data is used to draw conclusions about them and make decisions for them. Otherwise, AI-based inferences will contribute to the obstacles that people with disabilities must overcome in their daily lives.