{"title":"Similar concept matters: Prototype analogy for few-shot classification","authors":"Tong Ouyang, Bo Ma, Hao Xu","doi":"10.1016/j.neucom.2025.130648","DOIUrl":null,"url":null,"abstract":"<div><div>Few-shot classification aims to produce a classifier to recognize novel classes not seen during training with few labeled samples. The unseen classes and scarce samples make few-shot classification truly challenging. Transfer learning has been demonstrated to be an efficient paradigm for few-shot classification problem in recent literature. However, most methods based on transfer learning only utilize the parameters of the pre-trained model, ignoring the base features themselves which can be seen as the semantic concepts learned by the pre-trained model. In this paper, our main innovation lies in highlighting the importance of the learned semantic concepts of base classes. We propose a simple yet effective approach for few-shot classification to explore the pre-trained semantic features of base classes. Our approach innovatively employs prototype analogy inside and outside the few-shot classification task, to perform clustering and to select base features respectively, in an alternate and iterative way. We further design the best arrangement for these two steps. The initial centroids for clustering are constantly optimized by more and more accurate base features which are selected by the clustered novel prototypes from the previous iteration. When the iteration converges, the best semantic base features are selected to complete the prototypes of novel classes. Extensive experiments on four standard datasets and two deep backbones are conducted to demonstrate the effectiveness of our proposed prototype analogy method. 
Notably, our method requires neither sophisticated transductive algorithm nor additional learnable parameters besides the pre-trained model yet achieving comparable or even state-of-the-art performance on the miniImageNet, tieredImageNet and CIFAR-FS datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130648"},"PeriodicalIF":5.5000,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225013207","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Few-shot classification aims to produce a classifier that recognizes novel classes not seen during training from only a few labeled samples. The unseen classes and scarce samples make few-shot classification genuinely challenging. Transfer learning has been demonstrated to be an effective paradigm for the few-shot classification problem in recent literature. However, most transfer-learning-based methods utilize only the parameters of the pre-trained model, ignoring the base features themselves, which can be seen as the semantic concepts learned by the pre-trained model. In this paper, our main innovation lies in highlighting the importance of the semantic concepts learned from the base classes. We propose a simple yet effective approach for few-shot classification that exploits the pre-trained semantic features of base classes. Our approach innovatively employs prototype analogy both inside and outside the few-shot classification task, to perform clustering and to select base features respectively, in an alternating and iterative way. We further design the best arrangement of these two steps. The initial centroids for clustering are continually refined by increasingly accurate base features, which are selected by the clustered novel prototypes from the previous iteration. When the iteration converges, the best semantic base features are selected to complete the prototypes of the novel classes. Extensive experiments on four standard datasets and two deep backbones demonstrate the effectiveness of the proposed prototype analogy method. Notably, our method requires neither a sophisticated transductive algorithm nor additional learnable parameters beyond the pre-trained model, yet achieves comparable or even state-of-the-art performance on the miniImageNet, tieredImageNet and CIFAR-FS datasets.
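The alternating procedure the abstract describes — initialize novel prototypes from support samples, select the most similar base features, use them to refine the prototypes, and iterate until convergence — can be sketched as follows. This is a minimal illustration on synthetic feature vectors, assuming cosine similarity and simple averaging for prototype completion; the array names, shot counts, and the top-5 selection are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for pre-trained features (illustrative only):
# semantic base-class features, and a few labeled support samples per novel class.
base_features = rng.normal(size=(100, 64))   # 100 base features, 64-dim
support = rng.normal(size=(5, 3, 64))        # 5 novel classes, 3 shots each

def normalize(x):
    """Project features onto the unit sphere so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

base_features = normalize(base_features)
prototypes = normalize(support.mean(axis=1))  # initial novel prototypes from support mean

for _ in range(20):
    old = prototypes.copy()
    # Prototype analogy outside the task: for each novel prototype,
    # select the base features most similar to it.
    sims = prototypes @ base_features.T                  # (5, 100) cosine similarities
    top = np.argsort(-sims, axis=1)[:, :5]               # indices of 5 analogous base features
    selected = base_features[top].mean(axis=1)           # (5, 64) averaged base features
    # Complete each prototype with the selected base features
    # and the support mean, then renormalize (clustering step inside the task).
    prototypes = normalize(support.mean(axis=1) + selected)
    if np.linalg.norm(prototypes - old) < 1e-6:          # stop once the iteration converges
        break

# Classify a query by its nearest completed prototype.
query = normalize(rng.normal(size=(64,)))
pred = int(np.argmax(prototypes @ query))
```

Note that this sketch adds no learnable parameters beyond the (here simulated) pre-trained features, mirroring the abstract's claim that the method needs neither a transductive algorithm nor extra trainable weights.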
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics covered.