{"title":"在有偏见的世界中允许有偏见的人工智能:新加坡对糖尿病视网膜病变筛查和转诊的人工智能的伦理分析。","authors":"Kathryn Muyskens, Angela Ballantyne, Julian Savulescu, Harisan Unais Nasir, Anantharaman Muralidharan","doi":"10.1007/s41649-024-00315-3","DOIUrl":null,"url":null,"abstract":"<div><p>A significant and important ethical tension in resource allocation and public health ethics is between utility and equity. We explore this tension between utility and equity in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population than for the Chinese majority. We discuss the problematic normative nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequalities. From there, we examine the specifics of the diabetic retinopathy case and weigh up specific trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. Given that any medical AI is more likely than not to have lingering bias due to bias in the training data that may reflect other social inequalities, we argue that it is permissible to implement an AI tool with residual bias where: (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) where the utility gained is significant enough and shared across groups (even if unevenly).</p></div>","PeriodicalId":44520,"journal":{"name":"Asian Bioethics Review","volume":"17 1","pages":"167 - 185"},"PeriodicalIF":1.1000,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11785882/pdf/","citationCount":"0","resultStr":"{\"title\":\"The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore\",\"authors\":\"Kathryn Muyskens, Angela Ballantyne, Julian Savulescu, Harisan Unais Nasir, Anantharaman Muralidharan\",\"doi\":\"10.1007/s41649-024-00315-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>A significant and important ethical tension in resource allocation and public health ethics is between utility and equity. We explore this tension between utility and equity in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population than for the Chinese majority. We discuss the problematic normative nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequalities. From there, we examine the specifics of the diabetic retinopathy case and weigh up specific trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. 
Given that any medical AI is more likely than not to have lingering bias due to bias in the training data that may reflect other social inequalities, we argue that it is permissible to implement an AI tool with residual bias where: (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) where the utility gained is significant enough and shared across groups (even if unevenly).</p></div>\",\"PeriodicalId\":44520,\"journal\":{\"name\":\"Asian Bioethics Review\",\"volume\":\"17 1\",\"pages\":\"167 - 185\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2024-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11785882/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Asian Bioethics Review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s41649-024-00315-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asian Bioethics Review","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s41649-024-00315-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore
Abstract: A significant ethical tension in resource allocation and public health ethics is that between utility and equity. We explore this tension in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population than for the Chinese majority. We discuss the problematic normative nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequality. From there, we examine the specifics of the diabetic retinopathy case and weigh the trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. Given that any medical AI is more likely than not to retain some bias, owing to bias in the training data that may reflect other social inequalities, we argue that it is permissible to implement an AI tool with residual bias where (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) the utility gained is significant enough and shared across groups (even if unevenly).
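The abstract's criteria lend themselves to a rough quantitative reading. Below is a minimal, hypothetical Python sketch of how a deployment team might measure per-group screening performance and check the second criterion (a significant utility gain shared across groups, even if unevenly). It is not the authors' method; all function names, figures, and thresholds are illustrative assumptions.

```python
# A minimal, hypothetical sketch (not drawn from the paper) of how a
# deployment team might operationalise the abstract's second criterion:
# the screening tool's utility gain over a human-only baseline should be
# significant and shared across groups, even if unevenly. All figures,
# names, and thresholds below are illustrative assumptions.

from collections import defaultdict

def per_group_sensitivity(y_true, y_pred, groups):
    """True-positive rate of the screening tool, computed per group."""
    tp, fn = defaultdict(int), defaultdict(int)
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:  # a genuine case of diabetic retinopathy
            (tp if pred == 1 else fn)[g] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def criterion_two_holds(tool_sens, baseline_sens, min_mean_gain=0.05):
    """Every group gains over the baseline, and the mean gain is significant.

    Deliberately tolerates uneven gains (hence residual disparity).
    Criterion (1) -- reducing the influence of bias relative to the status
    quo -- would need a separate comparison of tool vs. grader bias.
    """
    gains = {g: tool_sens[g] - baseline_sens[g] for g in tool_sens}
    shared = all(gain > 0 for gain in gains.values())
    significant = sum(gains.values()) / len(gains) >= min_mean_gain
    return shared and significant

# Hypothetical figures loosely echoing the case: the tool improves detection
# for both groups, but less for the minority group, so a gap remains.
tool = {"Chinese": 0.92, "Malay": 0.85}
baseline = {"Chinese": 0.80, "Malay": 0.78}
print(criterion_two_holds(tool, baseline))  # -> True under these assumptions
```

Under these assumed numbers the check passes: both groups gain over the baseline, so the gain is shared even though the Chinese-Malay performance gap persists, which is precisely the trade-off the paper argues can be permissible.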
About the Journal:
Asian Bioethics Review (ABR) is an international academic journal, based in Asia, providing a forum to express and exchange original ideas on all aspects of bioethics, especially those relevant to the region. Published quarterly, the journal seeks to promote collaborative research among scholars in Asia or with an interest in Asia, as well as multi-cultural and multi-disciplinary bioethical studies more generally. It appeals to all those working on bioethical issues in biomedicine, healthcare, caregiving and patient support, genetics, law and governance, health systems and policy, science studies and research. ABR provides analyses, perspectives and insights into new approaches in bioethics, recent changes in biomedical law and policy, developments in capacity building and professional training, and voices or essays from a student’s perspective. The journal includes articles, research studies, target articles, case evaluations and commentaries. It also publishes book reviews and correspondence to the editor. ABR welcomes original papers from all countries, particularly those that relate to Asia. ABR is the flagship publication of the Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore. The Centre for Biomedical Ethics is a collaborating centre on bioethics of the World Health Organization.