Governance for anti-racist AI in healthcare: integrating racism-related stress in psychiatric algorithms for Black Americans.

Frontiers in Digital Health · Impact Factor 3.2 · Q1 (Health Care Sciences & Services)
Published: 2025-05-15 (eCollection date: 2025-01-01) · DOI: 10.3389/fdgth.2025.1492736
Christopher T Fields, Carmen Black, Jannat K Thind, Oluwole Jegede, Damla Aksen, Matthew Rosenblatt, Shervin Assari, Chyrell Bellamy, Elijah Anderson, Avram Holmes, Dustin Scheinost
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119476/pdf/
Citations: 0

Abstract

While the world is aware of America's history of enslavement, the ongoing impact of anti-Black racism in the United States remains underemphasized in health intervention modeling. This Perspective argues that algorithmic bias, manifested in the worsened performance of clinical algorithms for Black versus white patients, is significantly driven by the failure to model the cumulative impacts of racism-related stress, particularly racial heteroscedasticity. Racial heteroscedasticity refers to the unequal variance in health outcomes and algorithmic predictions across racial groups, driven by differential exposure to racism-related stress. This may be particularly salient for Black Americans, for whom anti-Black bias has wide-ranging impacts that interact with differing backgrounds of generational trauma, socioeconomic status, and other social factors, producing unaccounted-for sources of variance that are not easily captured by a blanket "race" factor. Failing to account for these factors degrades the performance of these clinical algorithms for all Black patients. We outline key principles for anti-racist AI governance in healthcare, including: (1) mandating the inclusion of Black researchers and community members in AI development; (2) implementing rigorous audits to assess anti-Black bias; (3) requiring transparency in how algorithms process race-related data; and (4) establishing accountability measures that prioritize equitable outcomes for Black patients. By integrating these principles, AI can be developed to produce more equitable and culturally responsive healthcare interventions. This anti-racist approach challenges policymakers, researchers, clinicians, and AI developers to fundamentally rethink how AI is created, used, and regulated in healthcare, with profound implications for health policy, clinical practice, and patient outcomes across all medical domains.
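The abstract's central statistical claim, racial heteroscedasticity, and its second governance principle, auditing for bias, can be illustrated concretely. The sketch below is not the authors' method; it uses synthetic data and a hypothetical `residual_variance_by_group` helper to show how an audit might flag unequal residual variance across groups when a model ignores group-specific sources of variance (standing in here for unmodeled racism-related stress).

```python
# Minimal audit sketch for racial heteroscedasticity: compare the variance of
# a clinical model's residuals across groups. All data, group labels, and the
# helper function are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: group "A" carries larger unexplained outcome variance than
# group "B", mimicking differential exposure to unmodeled stressors.
n = 1000
group = rng.choice(["A", "B"], size=n)
true_risk = rng.normal(0.5, 0.1, size=n)
noise_sd = np.where(group == "A", 0.20, 0.05)  # unequal noise by group
observed = true_risk + rng.normal(0.0, 1.0, size=n) * noise_sd

# Stand-in for an algorithm that models mean risk but ignores group-specific
# variance; its residuals inherit the heteroscedasticity.
predicted = true_risk
residuals = observed - predicted

def residual_variance_by_group(residuals, group):
    """Per-group sample variance of residuals; unequal values across groups
    signal heteroscedasticity that a single blanket 'race' term cannot fix."""
    return {g: float(np.var(residuals[group == g], ddof=1))
            for g in np.unique(group)}

audit = residual_variance_by_group(residuals, group)
ratio = audit["A"] / audit["B"]
print(audit, ratio)
```

In practice an audit like this would run on held-out clinical data and pair the variance comparison with a formal test (e.g. Levene's test); a large variance ratio indicates the model's error behavior differs systematically by group even when average error looks similar.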

Source journal metrics: CiteScore 4.20; self-citation rate 0.00%; review time 13 weeks.