Enhancing fairness in disease prediction by optimizing multiple domain adversarial networks

Bin Li, Xiaoqian Jiang, Kai Zhang, Arif O. Harmanci, Bradley Malin, Hongchang Gao, Xinghua Shi

PLOS Digital Health, 4(5): e0000830. Published 30 May 2025. DOI: https://doi.org/10.1371/journal.pdig.0000830. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12124548/pdf/
Abstract
Predictive models in biomedicine need to ensure equitable and reliable outcomes for the populations to which they are applied. Biases in AI models for medical prediction can lead to unfair treatment and widening disparities, underscoring the need for effective techniques to address these issues; however, current approaches struggle to simultaneously mitigate biases induced by multiple sensitive features in biomedical data. To enhance fairness, we introduce a framework based on a Multiple Domain Adversarial Neural Network (MDANN), which incorporates multiple adversarial components. In an MDANN, an adversarial module learns a fair representation by back-propagating negative gradients with respect to multiple sensitive features (i.e., patient characteristics that should not influence a prediction, since doing so may intentionally or unintentionally create disparities in clinical decisions). The MDANN applies loss functions based on the Area Under the Receiver Operating Characteristic Curve (AUC) to address class imbalance, promoting equitable classification performance for minority groups (i.e., subsets of the population that are underrepresented or disadvantaged). Moreover, we use pre-trained convolutional autoencoders (CAEs) to extract deep representations of the data, aiming to enhance both prediction accuracy and fairness. Combining these mechanisms, we mitigate multiple biases and disparities to provide reliable and equitable disease prediction. We empirically demonstrate that the MDANN approach achieves better accuracy and fairness than other adversarial networks in predicting disease progression from brain imaging data while mitigating multiple demographic biases in Alzheimer's Disease and Autism populations.
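To make the mechanism described above concrete, the following is a minimal sketch (not the authors' released code) of the core ideas: a shared encoder feeds a disease classifier, while several adversarial heads, one per sensitive attribute, are trained through a gradient-reversal layer so that the reversed (negative) gradients discourage the encoder from encoding those attributes, and a pairwise AUC surrogate serves as the class-imbalance-aware task loss. Module names, dimensions, and the exact loss form are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)


class MultiAdversarialNet(nn.Module):
    """Shared encoder + disease classifier + one adversarial head per sensitive attribute."""

    def __init__(self, in_dim, hid_dim, n_out, sensitive_cardinalities, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        # In the paper, inputs are deep representations from pre-trained convolutional
        # autoencoders; here we simply assume a flat feature vector of size `in_dim`.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, n_out)
        self.adversaries = nn.ModuleList(
            [nn.Linear(hid_dim, k) for k in sensitive_cardinalities]
        )

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.classifier(z)
        # Each adversary sees the representation through the gradient-reversal layer,
        # so its gradients push the encoder away from attribute-predictive features.
        adv_logits = [head(grad_reverse(z, self.lamb)) for head in self.adversaries]
        return y_logits, adv_logits


def pairwise_auc_loss(scores, labels):
    """Squared-hinge pairwise surrogate for AUC: penalize positive scores that do not
    exceed negative scores by a margin of 1 (one plausible AUC-based loss choice)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.sum() * 0.0  # keep the graph intact when a batch lacks one class
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)  # all positive-negative score pairs
    return torch.clamp(1.0 - diff, min=0.0).pow(2).mean()


# Hypothetical training step: a binary disease label plus two sensitive attributes.
model = MultiAdversarialNet(in_dim=128, hid_dim=64, n_out=1,
                            sensitive_cardinalities=[2, 3])
x = torch.randn(32, 128)
y = torch.randint(0, 2, (32,))
s = [torch.randint(0, 2, (32,)), torch.randint(0, 3, (32,))]

y_logits, adv_logits = model(x)
loss = pairwise_auc_loss(y_logits.squeeze(-1), y)
for logits, labels in zip(adv_logits, s):
    loss = loss + nn.functional.cross_entropy(logits, labels)
loss.backward()  # reversed gradients steer the encoder toward attribute-invariant features
```

In practice, the adversarial terms would typically be weighted and the lambda of the gradient-reversal layer scheduled over training; those details are omitted here for brevity.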