Enhancing fairness in disease prediction by optimizing multiple domain adversarial networks.

Impact Factor: 7.7
PLOS Digital Health · Pub Date: 2025-05-30 · eCollection Date: 2025-05-01 · DOI: 10.1371/journal.pdig.0000830
Bin Li, Xiaoqian Jiang, Kai Zhang, Arif O Harmanci, Bradley Malin, Hongchang Gao, Xinghua Shi
Volume 4(5): e0000830 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12124548/pdf/
Citations: 0

Abstract

Predictive models in biomedicine need to ensure equitable and reliable outcomes for the populations to which they are applied. Biases in AI models for medical prediction can lead to unfair treatment and widening disparities, underscoring the need for effective techniques to address these issues. However, current approaches struggle to simultaneously mitigate biases induced by multiple sensitive features in biomedical data. To enhance fairness, we introduce a framework based on a Multiple Domain Adversarial Neural Network (MDANN), which incorporates multiple adversarial components. In an MDANN, an adversarial module learns fair representations by back-propagating negated gradients across multiple sensitive features (i.e., patient characteristics that should not influence a prediction outcome, as they may intentionally or unintentionally lead to disparities in clinical decisions). The MDANN applies loss functions based on the Area Under the Receiver Operating Characteristic Curve (AUC) to address class imbalance, promoting equitable classification performance for minority groups (i.e., subsets of the population that are underrepresented or disadvantaged). Moreover, we use pre-trained convolutional autoencoders (CAEs) to extract deep representations of the data, aiming to enhance both prediction accuracy and fairness. Combining these mechanisms, we mitigate multiple biases and disparities to provide reliable and equitable disease prediction. We empirically demonstrate that the MDANN approach achieves better accuracy and fairness than other adversarial networks in predicting disease progression from brain imaging data, mitigating multiple demographic biases in Alzheimer's disease and autism populations.
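The two mechanisms named in the abstract lend themselves to a compact illustration: gradient reversal (the shared encoder receives the negated gradient from each sensitive-attribute adversary, one reversal per sensitive feature) and an AUC-oriented loss for imbalanced classes. Below is a minimal NumPy sketch of both ideas; the pairwise hinge surrogate and the function names are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def pairwise_auc_hinge_loss(scores, labels, margin=1.0):
    """Hinge surrogate for 1 - AUC (an assumed stand-in for the paper's
    AUC-based loss): penalize every positive/negative pair in which the
    positive example does not outrank the negative by at least `margin`."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Broadcast to all positive-negative score differences.
    diffs = pos[:, None] - neg[None, :]
    return np.maximum(0.0, margin - diffs).mean()

def grad_reverse(grad, lam=1.0):
    """Gradient reversal: identity in the forward pass, -lam * grad in the
    backward pass. In an MDANN-style setup, the gradient flowing from each
    sensitive-attribute adversary would pass through one such reversal
    before reaching the shared encoder, pushing the encoder to *remove*
    information the adversary could exploit."""
    return -lam * grad
```

For example, with scores `[0.9, 0.8, 0.1, 0.2]` and labels `[1, 1, 0, 0]`, every positive already outranks every negative, so only the margin term contributes to the hinge loss; a ranking-based surrogate like this is insensitive to class imbalance because it scores pairs rather than individual examples.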
