What makes clinical machine learning fair? A practical ethics framework.

PLOS Digital Health · Pub Date: 2025-03-18 · eCollection Date: 2025-03-01 · DOI: 10.1371/journal.pdig.0000728
Marine Hoche, Olga Mineeva, Gunnar Rätsch, Effy Vayena, Alessandro Blasimme
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11918422/pdf/
Citations: 0

Abstract

Machine learning (ML) can make a tremendous contribution to medicine by streamlining decision-making, reducing mistakes, improving clinical accuracy, and ensuring better patient outcomes. The prospect of widespread and rapid integration of machine learning into clinical workflows has attracted considerable attention, in part because of its complex ethical implications, with algorithmic bias among the most frequently discussed issues. Here we introduce and discuss a practical ethics framework generated inductively through normative analysis of the practical challenges encountered in developing an actual clinical ML model (see case study). The framework can be used to identify, measure, and address bias in clinical machine learning models, thus improving fairness with respect to both model performance and health outcomes. We detail a proportionate approach to ML bias by defining the demands of fair ML in light of what is ethically justifiable and, at the same time, technically feasible given inevitable trade-offs. Our framework enables ethically robust and transparent decision-making in both the design-related and the context-dependent aspects of ML bias mitigation, thus improving accountability for both developers and clinical users.
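The paper's framework for identifying and measuring bias is detailed in its case study; as a purely illustrative sketch (not the authors' implementation), quantifying bias in a binary clinical classifier commonly starts with per-subgroup rates such as the positive prediction rate (used for demographic parity) and the true/false positive rates (used for equalized odds). The function and variable names below are hypothetical:

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Per-group rates commonly used to quantify bias in a binary
    classifier: positive prediction rate (demographic parity) and
    true/false positive rates (equalized odds)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        m = group == g
        pos_rate = y_pred[m].mean()                     # P(pred=1 | group=g)
        has_pos = (y_true[m] == 1).any()
        has_neg = (y_true[m] == 0).any()
        tpr = y_pred[m][y_true[m] == 1].mean() if has_pos else float("nan")
        fpr = y_pred[m][y_true[m] == 0].mean() if has_neg else float("nan")
        report[str(g)] = {"positive_rate": pos_rate, "TPR": tpr, "FPR": fpr}
    return report

# Toy example: a screening model evaluated on two patient subgroups.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(group_fairness_report(y_true, y_pred, group))
```

Large gaps between groups in these rates (e.g., in TPR, which corresponds to missed diagnoses) are what a bias audit would then flag for the kind of ethical and technical trade-off analysis the paper describes.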
