Multi-Expert Dynamic Gating and Feature Decoupling Algorithm for Long-Tail Image Classification
Kaiyang Liao, Junwen Pang, Yuanlin Zheng, Keer Wang, Guangfeng Lin, Yunfei Tan
Concurrency and Computation: Practice and Experience, vol. 37, issue 23-24. Published 2025-09-10. DOI: 10.1002/cpe.70287
Citations: 0
Abstract
The long-tail distribution is characterized by a large number of samples in a few categories (head classes) and a scarcity of samples in most categories (tail classes). This inherent class imbalance significantly degrades the performance of conventional classification models, particularly on tail classes. To tackle this challenge, we propose a Multi-Expert Dynamic Gating and Feature Decoupling Classification Algorithm based on Uniform Enhanced Sampling. The proposed method integrates multi-expert learning with data augmentation and improves tail-class performance by jointly optimizing the loss function and the expert assignment network. Specifically, a uniform enhanced sampling strategy is introduced to augment tail-class samples and increase their sampling frequency through resampling. During the feature learning stage, the shared layers of a convolutional network extract general features, while multiple expert models are trained independently. A feature decoupling technique is employed to separate generic and class-specific features. In addition, a binary gating mechanism is designed to dynamically assign experts while preventing over-reliance on specific categories. Extensive experiments on three benchmark long-tailed classification datasets (CIFAR10-LT, CIFAR100-LT, and ImageNet-LT) demonstrate that our method consistently outperforms existing state-of-the-art approaches. Ablation studies further confirm the effectiveness of the uniform enhanced sampling strategy and the joint optimization of multi-expert learning, showing that our algorithm successfully balances the model's attention across head and tail classes, thereby improving overall classification performance.
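The abstract names two general mechanisms: resampling that favors tail classes, and a gate that combines multiple expert predictions. The sketch below is a minimal, hypothetical illustration of those two ideas only; the function names, the inverse-frequency weighting, and the soft (softmax) gate are illustrative assumptions, not the paper's actual formulation (the paper describes a binary gating mechanism, whose details are not given in the abstract).

```python
import math

def resampling_weights(class_counts):
    """Hypothetical class-balanced sampling: weight each class by the
    inverse of its size, so tail classes are drawn more often and the
    sampled distribution approaches uniform over classes."""
    inv = [1.0 / c for c in class_counts]
    total = sum(inv)
    return [x / total for x in inv]

def gated_prediction(expert_logits, gate_scores):
    """Combine per-expert logits with gate weights. Shown here as a
    soft softmax gate for simplicity; a binary gate would instead
    select a subset of experts with 0/1 weights."""
    m = max(gate_scores)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in gate_scores]
    z = sum(exps)
    gates = [e / z for e in exps]
    num_classes = len(expert_logits[0])
    return [sum(g * logits[j] for g, logits in zip(gates, expert_logits))
            for j in range(num_classes)]

# Example: three classes with head -> tail sizes 5000, 500, 50.
weights = resampling_weights([5000, 500, 50])
print([round(w, 3) for w in weights])   # tail class gets the largest weight

# Example: two experts, two classes, equal gate scores.
combined = gated_prediction([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
print(combined)
```

Under inverse-frequency weighting, the tail class (50 samples) receives roughly 90% of the sampling mass in this toy example, which is the intuition behind increasing tail-class sampling frequency.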
Journal description:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.