Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives.

Impact Factor 3.4 · CAS Quartile 2 (Philosophy) · JCR Q1 (Ethics)
Yves Saint James Aquino, Stacy M Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang, Wendy A Rogers
DOI: 10.1136/jme-2022-108850
Journal of Medical Ethics, published 2025-05-21, pp. 420-428
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12171461/pdf/
Citations: 0

Abstract

Background: There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race).

Objectives: Our objectives are to canvass the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias.

Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers.

Results: Findings reveal considerable divergence in views on three key issues. First, views on whether bias is a problem in healthcare AI varied, with most participants agreeing bias is a problem (which we call the bias-critical view), a small number believing the opposite (the bias-denial view), and some arguing that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about the strategies to mitigate bias and about who is responsible for implementing them. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias.

Conclusion/significance: Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias and strategies to modify dominant approaches in AI development such as the use of participatory methods, and increased diversity and inclusion in research teams and research participant recruitment and selection.

Source journal

Journal of Medical Ethics (Medicine: Medical Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Annual publications: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.