Multi-Expert Dynamic Gating and Feature Decoupling Algorithm for Long-Tail Image Classification

IF 1.5 · JCR Q3 (COMPUTER SCIENCE, SOFTWARE ENGINEERING) · CAS Tier 4, Computer Science
Kaiyang Liao, Junwen Pang, Yuanlin Zheng, Keer Wang, Guangfeng Lin, Yunfei Tan
Journal: Concurrency and Computation: Practice and Experience, Vol. 37, Issue 23-24
DOI: 10.1002/cpe.70287 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.70287)
Published: 2025-09-10 (Journal Article)
Citations: 0

Abstract

The long-tail distribution is characterized by a large number of samples in a few categories (head classes) and a scarcity of samples in most categories (tail classes). This inherent class imbalance significantly degrades the performance of conventional classification models, particularly on tail classes. To tackle this challenge, we propose a Multi-Expert Dynamic Gating and Feature Decoupling Classification Algorithm based on Uniform Enhanced Sampling. The proposed method integrates multi-expert learning with data augmentation and enhances tail-class performance by jointly optimizing the loss function and the expert assignment network. Specifically, a uniform enhanced sampling strategy is introduced to augment tail-class samples and increase their sampling frequency through resampling. During the feature learning stage, the shared layers of a convolutional network extract general features, while multiple expert models are trained independently. A feature decoupling technique is employed to separate generic and class-specific features. In addition, a binary gating mechanism is designed to dynamically assign experts while preventing over-reliance on specific categories. Extensive experiments on three benchmark long-tailed classification datasets (CIFAR10-LT, CIFAR100-LT, and ImageNet-LT) demonstrate that our method consistently outperforms existing state-of-the-art approaches. Ablation studies further confirm the effectiveness of the uniform enhanced sampling strategy and the joint optimization of multi-expert learning, showing that our algorithm successfully balances the model's attention across head and tail classes, thereby improving overall classification performance.
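The abstract does not give implementation details for the uniform enhanced sampling strategy, but its resampling component can be illustrated with a minimal sketch: tail-class indices are drawn with replacement until every class is sampled at the head-class frequency. The function name and the choice of matching the largest class's count are assumptions for illustration, not the paper's exact procedure (which additionally applies data augmentation to tail-class samples).

```python
import random
from collections import Counter

def uniform_enhanced_sample(labels, seed=0):
    """Resample indices so every class is drawn with equal frequency.

    `labels` is a list of integer class labels for a long-tailed dataset.
    Tail-class indices are oversampled (with replacement) until each class
    contributes as many samples as the largest (head) class.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    # Target count: match the head class so the sampling is uniform.
    target = max(len(idxs) for idxs in by_class.values())
    resampled = []
    for idxs in by_class.values():
        # Sampling with replacement repeats tail-class indices as needed.
        resampled.extend(rng.choices(idxs, k=target))
    rng.shuffle(resampled)
    return resampled

# Long-tailed toy labels: class 0 is the head, class 2 the tail.
labels = [0] * 8 + [1] * 3 + [2] * 1
idx = uniform_enhanced_sample(labels)
print(Counter(labels[i] for i in idx))  # each class appears 8 times
```

In a training pipeline, the returned index list would drive an epoch's batch order, so tail-class images appear as often as head-class images; the paper's augmentation step would then diversify the repeated tail samples.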

Source journal: Concurrency and Computation-Practice & Experience (Engineering/Technology - Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Articles per year: 664
Review time: 9.6 months
Aims and scope: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.