Bias Alleviation through Network Pruning for Sparse and Debiased Models

Impact Factor: 13.7
Sangwoo Hong, Sehwan Kim, Hyungjun Joo, Hyeonggeun Han, Jiyoon Shin, Yoav Wald, Jungwoo Lee
Journal: IEEE Transactions on Image Processing
DOI: 10.1109/TIP.2026.3687070
Publication date: 2026-04-29
Publication type: Journal Article
Citations: 0

Abstract

Pruning is a highly effective method for reducing the size of neural networks with negligible impact on their average performance. However, recent studies have revealed that pruning actually amplifies the bias in models, leading to decreased performance for underrepresented groups. To address this issue, we first analyze the impact of pruning on the confidence of each sample and introduce Accumulated Confidence (AC). AC is a proxy that facilitates the identification of bias-conflicting and bias-aligned samples without relying on group annotations. We then propose a debiasing algorithm called DEbiasing Network through Pruning (DENP), which utilizes AC to mitigate bias within the network. Even without bias information, DENP exhibits remarkable debiasing performance at varying levels of sparsity, effectively mitigating the bias-exacerbating property of pruning and resulting in neural networks that are both sparse and debiased. Moreover, even when compared with state-of-the-art debiasing baselines under identical conditions, DENP still achieves the best performance on multiple benchmark datasets, demonstrating its superior debiasing capabilities.
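The AC proxy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes AC is the sum of each sample's ground-truth-class softmax confidence recorded at successive pruning checkpoints, and that samples with the lowest AC are flagged as likely bias-conflicting. The function names, the quantile threshold, and the exact accumulation rule are all illustrative assumptions.

```python
import numpy as np

def accumulated_confidence(confidences):
    """Sum each sample's true-class confidence across pruning checkpoints.

    confidences: array of shape (num_checkpoints, num_samples), where each
    entry is the softmax probability the (pruned) model assigns to the
    sample's ground-truth class at that checkpoint.
    Returns a per-sample accumulated confidence vector of shape (num_samples,).
    """
    return np.asarray(confidences).sum(axis=0)

def flag_bias_conflicting(ac, quantile=0.2):
    """Flag the lowest-AC samples as likely bias-conflicting.

    Bias-aligned samples tend to stay confidently correct under pruning,
    so persistently low confidence suggests a bias-conflicting sample.
    """
    threshold = np.quantile(ac, quantile)
    return ac <= threshold

# Toy example: 3 pruning checkpoints, 5 samples.
# Samples 2 and 4 keep low true-class confidence throughout.
conf = [[0.90, 0.80, 0.30, 0.95, 0.20],
        [0.92, 0.85, 0.25, 0.90, 0.30],
        [0.95, 0.90, 0.20, 0.93, 0.25]]
ac = accumulated_confidence(conf)
mask = flag_bias_conflicting(ac, quantile=0.4)
```

In this sketch, `mask` picks out samples 2 and 4, whose confidence remained low across all checkpoints; a debiasing method could then upweight such samples during retraining.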
