BiasPruner: Mitigating bias transfer in continual learning for fair medical image analysis

Nourhan Bayasi, Jamil Fayyad, Alceu Bissoto, Ghassan Hamarneh, Rafeef Garbi

Medical Image Analysis, Volume 107, Article 103764 (published 2025-08-18). DOI: 10.1016/j.media.2025.103764
Available at: https://www.sciencedirect.com/science/article/pii/S136184152500310X
Continual Learning (CL) enables neural networks to learn new tasks while retaining previous knowledge. However, most CL methods fail to address bias transfer, where spurious correlations propagate to future tasks or influence past knowledge. This bidirectional bias transfer negatively impacts model performance and fairness, especially in medical imaging, where it can lead to misdiagnoses and unequal treatment. In this work, we show that conventional CL methods amplify these biases, posing risks for diverse patient cohorts. To address this, we propose BiasPruner, a framework that mitigates bias propagation through debiased subnetworks, while preserving sequential learning and avoiding catastrophic forgetting. BiasPruner computes a bias attribution score to identify and prune network units responsible for spurious correlations, creating task-specific subnetworks that learn unbiased representations. As new tasks are learned, the framework integrates non-biased units from previous subnetworks to preserve transferable knowledge and prevent bias transfer. During inference, a task-agnostic gating mechanism selects the optimal subnetwork for robust predictions. We evaluate BiasPruner on medical imaging benchmarks, including skin lesion and chest X-ray classification tasks, where biased data (e.g., spurious skin tone correlations) can exacerbate disparities. Our experiments show that BiasPruner outperforms state-of-the-art CL methods in both accuracy and fairness. Code is available at: BiasPruner.
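The bias-attribution-and-prune step described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it uses the absolute Pearson correlation between each unit's activation and a known spurious attribute as a stand-in attribution score, and removes the highest-scoring fraction of units via a binary mask. The function names, scoring rule, and pruning criterion are assumptions; the paper's actual attribution score may differ.

```python
import numpy as np

def bias_attribution_scores(activations, bias_labels):
    """Score each unit by |Pearson correlation| between its activation
    and a spurious (bias) attribute. Higher score = more biased unit.
    activations: (num_samples, num_units); bias_labels: (num_samples,)."""
    a = activations - activations.mean(axis=0, keepdims=True)
    b = bias_labels.astype(float) - bias_labels.mean()
    cov = a.T @ b / len(b)                      # per-unit covariance with the bias attribute
    std = a.std(axis=0) * b.std() + 1e-8        # normalize to a correlation coefficient
    return np.abs(cov / std)

def prune_biased_units(scores, prune_frac=0.2):
    """Return a binary mask that zeroes out the prune_frac most biased units,
    keeping the remaining units as the debiased subnetwork."""
    k = int(prune_frac * scores.size)
    mask = np.ones_like(scores)
    if k > 0:
        mask[np.argsort(scores)[-k:]] = 0.0     # drop the k highest-scoring units
    return mask

# Toy example: unit 3 tracks the bias attribute almost perfectly,
# so it receives the highest score and is pruned from the mask.
rng = np.random.default_rng(0)
bias = rng.integers(0, 2, 200)
acts = rng.normal(size=(200, 8))
acts[:, 3] = bias * 2.0 + 0.01 * rng.normal(size=200)
mask = prune_biased_units(bias_attribution_scores(acts, bias), prune_frac=0.125)
```

In this toy setting the mask zeroes unit 3 and keeps the other seven, mimicking how a task-specific debiased subnetwork would exclude units carrying the spurious correlation while retaining transferable ones.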
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.