{"title":"SFBM: Shared Feature Bias Mitigating for Long-Tailed Image Recognition.","authors":"Xinqiao Zhao,Mingjie Sun,Eng Gee Lim,Yao Zhao,Jimin Xiao","doi":"10.1109/tnnls.2025.3586215","DOIUrl":null,"url":null,"abstract":"Long-tailed distribution exists in real-world scenario and compromises the performance of recognition models. In this article, we point out that a neural network classifier has a shared feature bias, which tends to regard the shared features among different classes as head-class discriminative features, leading to misclassifications on tail-class samples under long-tailed scenarios. To solve this issue, we propose a shared feature bias mitigating (SFBM) framework. Specifically, we create two parallel classifiers trained concurrently with the baseline classifier, using our special training loss. The parallel classifier weight sums are then used for estimating the shared feature components in baseline classifier weights. Finally, we rectify the baseline classifier by removing the estimated shared feature components from it while supplementing the parallel classifier weights class by class to the rectified classifier weights, mitigating shared feature bias. Our proposed SFBM demonstrates broad compatibility with nearly all recognition methods while maintaining high computational efficiency, as it introduces no additional computation during inference. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that simply incorporating SFBM during the training phase consistently boosts the performance of various state-of-the-art methods by significant margins. The complete source code will be made publicly available at https://github.com/bzbz-bot/SFBM.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"167 1","pages":""},"PeriodicalIF":8.9000,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3586215","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Long-tailed distributions exist in real-world scenarios and compromise the performance of recognition models. In this article, we point out that a neural network classifier has a shared feature bias: it tends to regard features shared among different classes as head-class discriminative features, leading to misclassification of tail-class samples under long-tailed scenarios. To address this issue, we propose a shared feature bias mitigating (SFBM) framework. Specifically, we create two parallel classifiers trained concurrently with the baseline classifier using a specially designed training loss. The parallel classifier weight sums are then used to estimate the shared feature components in the baseline classifier weights. Finally, we rectify the baseline classifier by removing the estimated shared feature components from its weights while supplementing the parallel classifier weights, class by class, to the rectified classifier weights, thereby mitigating the shared feature bias. The proposed SFBM is broadly compatible with nearly all recognition methods and remains computationally efficient, as it introduces no additional computation during inference. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that simply incorporating SFBM during the training phase consistently boosts the performance of various state-of-the-art methods by significant margins. The complete source code will be made publicly available at https://github.com/bzbz-bot/SFBM.
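The abstract only sketches the rectification step, so the following is a minimal PyTorch sketch of one plausible reading: after training, the summed parallel classifier weights serve as a per-class estimate of the shared feature components, which are subtracted from the baseline weights before the parallel weights are added back class by class. All concrete choices here (FEAT_DIM, NUM_CLASSES, the alpha scaling knob, the exact estimator and supplement, and the omitted special training loss) are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (hypothetical, not taken from the paper).
FEAT_DIM, NUM_CLASSES = 512, 100

# Baseline classifier and two parallel classifiers trained concurrently;
# the paper's special training loss is not given in the abstract and is
# omitted here.
baseline = nn.Linear(FEAT_DIM, NUM_CLASSES, bias=False)
parallel_1 = nn.Linear(FEAT_DIM, NUM_CLASSES, bias=False)
parallel_2 = nn.Linear(FEAT_DIM, NUM_CLASSES, bias=False)

@torch.no_grad()
def rectify(baseline, parallel_1, parallel_2, alpha=1.0):
    """One-time, post-training weight rectification (assumed reading).

    The summed parallel classifier weights estimate the shared feature
    components hidden in the baseline weights; these are removed, and
    the per-class parallel weights are then supplemented back. `alpha`
    is a hypothetical scaling knob, not from the paper.
    """
    # (C, D) per-class estimate of the shared feature components.
    shared = parallel_1.weight + parallel_2.weight
    # Remove the estimated shared components from the baseline weights.
    rectified = baseline.weight - alpha * shared
    # Supplement the parallel classifier weights class by class
    # (assumed: one parallel classifier's weights per class).
    for c in range(NUM_CLASSES):
        rectified[c] = rectified[c] + parallel_1.weight[c]
    baseline.weight.copy_(rectified)

rectify(baseline, parallel_1, parallel_2)
# After rectification, inference uses the baseline classifier unchanged,
# which is consistent with the claim of no extra computation at test time.
```

Whatever the paper's exact estimator, the key design property survives in this sketch: rectification is a one-time edit to the classifier weights, so the deployed model has the same architecture and inference cost as the baseline.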
Journal introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.