{"title":"Evaluating trustworthiness in AI-Based diabetic retinopathy screening: addressing transparency, consent, and privacy challenges.","authors":"Anshul Chauhan, Debarati Sarkar, Garima Singh Verma, Harsh Rastogi, Karthik Adapa, Mona Duggal","doi":"10.1186/s12910-025-01265-7","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) offers significant potential to drive advancements in healthcare; however, the development and implementation of AI models present complex ethical, legal, social, and technical challenges, as data practices often undermine regulatory frameworks in various regions worldwide. This study explores stakeholder perspectives on the development and deployment of AI algorithms for diabetic retinopathy (DR) screening, with a focus on ethical risks, data practices, governance, and emerging shortcomings in the Global South AI discourse.</p><p><strong>Methods: </strong>Fifteen semi-structured interviews were conducted with ophthalmologists, program officers, AI developers, bioethics experts, and legal professionals. Thematic analysis was guided by OECD principles for responsible AI stewardship. Interviews were analyzed using MAXQDA software to identify themes related to AI trustworthiness and ethical governance.</p><p><strong>Results: </strong>Six key themes emerged regarding the perceived trustworthiness of AI: algorithmic effectiveness, responsible data collection, ethical approval processes, explainability, implementation challenges, and accountability. Participants reported critical shortcomings in AI companies' data collection practices, including a lack of transparency, inadequate consent processes, and limited patient awareness about data ownership. These findings highlight how unchecked data collection and curation practices may reinforce data colonialism in low and middle-income healthcare systems.</p><p><strong>Conclusion: </strong>Ensuring trustworthy AI requires transparent and accountable data practices, robust patient consent mechanisms, and regulatory frameworks aligned with ethical and privacy standards. Addressing these issues is vital to safeguarding patient rights, preventing data misuse, and fostering responsible AI ecosystems in the Global South.</p>","PeriodicalId":55348,"journal":{"name":"BMC Medical Ethics","volume":"26 1","pages":"140"},"PeriodicalIF":3.1000,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12532412/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1186/s12910-025-01265-7","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
Background: Artificial intelligence (AI) offers significant potential to drive advancements in healthcare; however, the development and implementation of AI models present complex ethical, legal, social, and technical challenges, as data practices often undermine regulatory frameworks in various regions worldwide. This study explores stakeholder perspectives on the development and deployment of AI algorithms for diabetic retinopathy (DR) screening, with a focus on ethical risks, data practices, governance, and emerging shortcomings in the Global South AI discourse.
Methods: Fifteen semi-structured interviews were conducted with ophthalmologists, program officers, AI developers, bioethics experts, and legal professionals. Thematic analysis was guided by OECD principles for responsible AI stewardship. Interviews were analyzed using MAXQDA software to identify themes related to AI trustworthiness and ethical governance.
Results: Six key themes emerged regarding the perceived trustworthiness of AI: algorithmic effectiveness, responsible data collection, ethical approval processes, explainability, implementation challenges, and accountability. Participants reported critical shortcomings in AI companies' data collection practices, including a lack of transparency, inadequate consent processes, and limited patient awareness about data ownership. These findings highlight how unchecked data collection and curation practices may reinforce data colonialism in low- and middle-income healthcare systems.
Conclusion: Ensuring trustworthy AI requires transparent and accountable data practices, robust patient consent mechanisms, and regulatory frameworks aligned with ethical and privacy standards. Addressing these issues is vital to safeguarding patient rights, preventing data misuse, and fostering responsible AI ecosystems in the Global South.
Journal description:
BMC Medical Ethics is an open access journal publishing original peer-reviewed research articles in relation to the ethical aspects of biomedical research and clinical practice, including professional choices and conduct, medical technologies, healthcare systems and health policies.