Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions.

Journal of Law and Health · Pub Date: 2021-01-01
Sahar Takshi
{"title":"Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions.","authors":"Sahar Takshi","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having high pain tolerance, leading to undertreatment. Women with disabilities are often undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become engrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithminformed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used to make both clinical and administrative decisions by hospitals, physicians, and insurers--yet there is no framework that specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles in its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office of Civil Rights. State level protections by medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation on healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industries can self-regulate to set nondiscrimination standards and concludes by recommending standards and creating a super-regulator to address disparate impact by AI. As the world moves towards automation, it is imperative that ongoing concerns about systemic discrimination are removed to prevent further marginalization in healthcare.</p>","PeriodicalId":73804,"journal":{"name":"Journal of law and health","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of law and health","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having a high pain tolerance, leading to undertreatment. Women with disabilities often go undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become ingrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithm-informed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used by hospitals, physicians, and insurers to make both clinical and administrative decisions, yet there is no framework that specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles into its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office for Civil Rights. State-level protections, such as medical licensing boards and malpractice liability, are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation in healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industry can self-regulate to set nondiscrimination standards, and concludes by recommending standards and the creation of a super-regulator to address disparate impact by AI. As the world moves toward automation, it is imperative to resolve ongoing concerns about systemic discrimination and prevent further marginalization in healthcare.
