What Is Fair? Defining Fairness in Machine Learning for Health.

Statistics in Medicine · IF 1.8 · JCR Q3 (Mathematical & Computational Biology) · CAS Region 4 (Medicine)
Jianhui Gao, Benson Chou, Zachary R McCaw, Hilary Thurston, Paul Varghese, Chuan Hong, Jessica Gronsbell
DOI: 10.1002/sim.70234 · Volume 44 (20-22), e70234 · Published 2025-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12436242/pdf/
Citations: 0

Abstract

Ensuring that machine-learning (ML) models are safe, effective, and equitable across all patients is critical for clinical decision-making and for preventing the amplification of existing health disparities. In this work, we examine how fairness is conceptualized in ML for health, including why ML models may lead to unfair decisions and how fairness has been measured in diverse real-world applications. We review commonly used fairness notions within group, individual, and causal-based frameworks. We also discuss the outlook for future research and highlight opportunities and challenges in operationalizing fairness in health-focused applications.
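To make the idea of "measuring fairness" concrete, below is a minimal illustrative sketch of two standard group-fairness notions (demographic parity and equal opportunity). The metric definitions follow common usage in the fairness literature, not any specific formulation from this paper, and the toy data are hypothetical.

```python
def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in (0, 1):
        # Restrict to individuals whose true label is positive.
        pos = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
        tpr[g] = sum(pos) / len(pos)
    return abs(tpr[0] - tpr[1])

# Hypothetical toy data: binary predictions for two patient groups (0 and 1).
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))        # 0.5 vs 0.5 -> 0.0
print(equal_opportunity_diff(y_true, y_pred, group)) # 0.5 vs 1.0 -> 0.5
```

In this toy example the model satisfies demographic parity (equal positive-prediction rates) while violating equal opportunity (unequal true-positive rates), illustrating the paper's point that different fairness notions can disagree on the same model.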

Source journal
Statistics in Medicine (Medicine — Public, Environmental & Occupational Health)
CiteScore: 3.40
Self-citation rate: 10.00%
Articles per year: 334
Review time: 2-4 weeks
Journal description: The journal aims to influence practice in medicine and its associated sciences through the publication of papers on statistical and other quantitative methods. Papers will explain new methods and demonstrate their application, preferably through a substantive, real, motivating example or a comprehensive evaluation based on an illustrative example. Alternatively, papers will report on case studies where creative use or technical generalization of established methodology is directed toward a substantive application. Reviews of, and tutorials on, general topics relevant to the application of statistics to medicine will also be published. The main criteria for publication are appropriateness of the statistical methods to a particular medical problem and clarity of exposition. Papers with primarily mathematical content will be excluded. The journal aims to enhance communication between statisticians, clinicians, and medical researchers.