Weighing the benefits and risks of collecting race and ethnicity data in clinical settings for medical artificial intelligence

IF 23.8 · JCR Q1, MEDICAL INFORMATICS · CAS Tier 1 (Medicine)
Amelia Fiske PhD , Sarah Blacker PhD , Lester Darryl Geneviève PhD , Theresa Willem MA , Marie-Christine Fritzsche , Alena Buyx MD , Leo Anthony Celi MD , Stuart McLennan PhD
DOI: 10.1016/j.landig.2025.01.003
Journal: Lancet Digital Health, volume 7, issue 4, pages e286–e294
Publication date: 2025-03-25
Publication type: Journal Article (Viewpoint)
URL: https://www.sciencedirect.com/science/article/pii/S2589750025000032
Citations: 0

Abstract

Weighing the benefits and risks of collecting race and ethnicity data in clinical settings for medical artificial intelligence
Many countries around the world do not collect race and ethnicity data in clinical settings. Without such identified data, it is difficult to identify biases in the training data or output of a given artificial intelligence (AI) algorithm, and to work towards medical AI tools that do not exclude or further harm marginalised groups. However, the collection of these data also poses specific risks to racially minoritised populations and other marginalised groups. This Viewpoint weighs the risks of collecting race and ethnicity data in clinical settings against the risks of not collecting those data. The collection of more comprehensive identified data (ie, data that include personal attributes such as race, ethnicity, and sex) has the possibility to benefit racially minoritised populations that have historically faced worse health outcomes and health-care access, and inadequate representation in research. However, the collection of extensive demographic data raises important concerns that include the construction of intersectional social categories (ie, race and its shifting meaning in different sociopolitical contexts), the risks of biological reductionism, and the potential for misuse, particularly in situations of historical exclusion, violence, conflict, genocide, and colonialism. Careful navigation of identified data collection is key to building better AI algorithms and to work towards medicine that does not exclude or harm marginalised groups.
Source journal: Lancet Digital Health
CiteScore: 41.20
Self-citation rate: 1.60%
Articles per year: 232
Review time: 13 weeks
About the journal: The Lancet Digital Health publishes important, innovative, and practice-changing research on any topic connected with digital technology in clinical medicine, public health, and global health. The journal's open-access content crosses subject boundaries, building bridges between health professionals and researchers. By bringing together the most important advances in this multidisciplinary field, The Lancet Digital Health is the most prominent publishing venue in digital health. We publish a range of content types, including Articles, Review, Comment, and Correspondence, contributing to promoting digital technologies in health practice worldwide.