Bias Alleviation through Network Pruning for Sparse and Debiased Models

Sangwoo Hong, Sehwan Kim, Hyungjun Joo, Hyeonggeun Han, Jiyoon Shin, Yoav Wald, Jungwoo Lee

IEEE Transactions on Image Processing, published 2026-04-29. DOI: 10.1109/TIP.2026.3687070
Citations: 0
Abstract
Pruning is a highly effective method for reducing the size of neural networks with negligible impact on their average performance. However, recent studies have revealed that pruning actually amplifies the bias in models, degrading performance for underrepresented groups. To address this issue, we first analyze the impact of pruning on the confidence of each sample and introduce Accumulated Confidence (AC), a proxy that identifies bias-conflicting and bias-aligned samples without relying on group annotations. We then propose a debiasing algorithm called DEbiasing Network through Pruning (DENP), which utilizes AC to mitigate bias within the network. Even without bias information, DENP exhibits remarkable debiasing performance at varying levels of sparsity, effectively mitigating the bias-exacerbating property of pruning and yielding neural networks that are both sparse and debiased. Moreover, even when compared with state-of-the-art debiasing baselines under identical conditions, DENP still achieves the best performance on multiple benchmark datasets, demonstrating its superior debiasing capabilities.
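The abstract describes Accumulated Confidence (AC) as a per-sample proxy for spotting bias-conflicting examples without group labels. A minimal sketch of one plausible reading: sum each sample's true-class confidence across pruning steps, then flag the lowest-AC samples as likely bias-conflicting. The function names, the summation form of AC, and the quantile threshold are all assumptions for illustration; the paper's exact definition may differ.

```python
import numpy as np

def accumulated_confidence(confidence_per_step):
    """Sum each sample's true-class confidence over pruning steps.

    confidence_per_step: array of shape (T, N), where entry [t, i] is the
    model's softmax confidence on the correct label for sample i after
    pruning step t. (Hypothetical interface, not the paper's exact AC.)
    """
    return np.asarray(confidence_per_step).sum(axis=0)

def flag_bias_conflicting(ac_scores, quantile=0.2):
    """Mark the lowest-AC fraction of samples as likely bias-conflicting."""
    threshold = np.quantile(ac_scores, quantile)
    return ac_scores <= threshold

# Toy example: 3 pruning steps, 5 samples. Sample 2's confidence collapses
# as pruning proceeds, so it accumulates the lowest AC and gets flagged.
conf = np.array([
    [0.90, 0.80, 0.30, 0.95, 0.70],
    [0.92, 0.75, 0.20, 0.96, 0.65],
    [0.91, 0.70, 0.10, 0.97, 0.60],
])
ac = accumulated_confidence(conf)
flags = flag_bias_conflicting(ac, quantile=0.2)
```

The intuition, consistent with the abstract, is that bias-conflicting samples lose confidence fastest under pruning, so accumulating confidence over sparsification separates them from bias-aligned samples.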