The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore

Impact Factor: 1.1 · JCR Quartile: Q3 (Ethics)
Kathryn Muyskens, Angela Ballantyne, Julian Savulescu, Harisan Unais Nasir, Anantharaman Muralidharan
DOI: 10.1007/s41649-024-00315-3
Journal: Asian Bioethics Review, Volume 17, Issue 1, pp. 167–185
Publication date: 2024-10-31
Publication type: Journal Article
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11785882/pdf/
Publisher page: https://link.springer.com/article/10.1007/s41649-024-00315-3
Citation count: 0

Abstract

A significant and important ethical tension in resource allocation and public health ethics is between utility and equity. We explore this tension between utility and equity in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population than for the Chinese majority. We discuss the problematic normative nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequalities. From there, we examine the specifics of the diabetic retinopathy case and weigh up specific trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. Given that any medical AI is more likely than not to have lingering bias due to bias in the training data that may reflect other social inequalities, we argue that it is permissible to implement an AI tool with residual bias where: (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) where the utility gained is significant enough and shared across groups (even if unevenly).

Source journal metrics: CiteScore 6.20 · Self-citation rate: 3.40% · Publication volume: 32
Journal description: Asian Bioethics Review (ABR) is an international academic journal, based in Asia, providing a forum to express and exchange original ideas on all aspects of bioethics, especially those relevant to the region. Published quarterly, the journal seeks to promote collaborative research among scholars in Asia or with an interest in Asia, as well as multi-cultural and multi-disciplinary bioethical studies more generally. It will appeal to all working on bioethical issues in biomedicine, healthcare, caregiving and patient support, genetics, law and governance, health systems and policy, science studies and research. ABR provides analyses, perspectives and insights into new approaches in bioethics, recent changes in biomedical law and policy, developments in capacity building and professional training, and voices or essays from a student's perspective. The journal includes articles, research studies, target articles, case evaluations and commentaries. It also publishes book reviews and correspondence to the editor. ABR welcomes original papers from all countries, particularly those that relate to Asia. ABR is the flagship publication of the Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore. The Centre for Biomedical Ethics is a collaborating centre on bioethics of the World Health Organization.