{"title":"Boosting Few-shot Learning by Self-calibration in Feature Space","authors":"Kai Zheng, Liu Cheng, Jiehong Shen","doi":"10.1145/3552487.3556437","DOIUrl":null,"url":null,"abstract":"Few-shot learning aims at adapting models to a novel task with extremely few labeled samples. Fine-tuning the models pre-trained on a base dataset has been recently demonstrated to be an effective approach. However, a dilemma emerges as whether to modify the parameters of the feature extractor. This is because tuning a vast number of parameters based on only a handful of samples tends to induce overfitting, while fixing the parameters leads to inherent bias in the extracted features since the novel classes are unseen for the pre-trained feature extractor. To alleviate this issue, we novelly reformulate fine-tuning as calibrating the biased features of novel samples conditioned on a fixed feature extractor through an auxiliary network. Technically, a self-calibration framework is proposed to construct improved image-level features by progressively performing local alignment based on a self-supervised Transformer. Extensive experiments demonstrate that the proposed method vastly outperforms the state-of-the-art methods.","PeriodicalId":274055,"journal":{"name":"Proceedings of the 1st International Workshop on Methodologies for Multimedia","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Methodologies for Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3552487.3556437","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Few-shot learning aims to adapt models to a novel task with extremely few labeled samples. Fine-tuning models pre-trained on a base dataset has recently been demonstrated to be an effective approach. However, a dilemma arises as to whether to modify the parameters of the feature extractor: tuning a vast number of parameters based on only a handful of samples tends to induce overfitting, while fixing the parameters leads to inherent bias in the extracted features, since the novel classes are unseen by the pre-trained feature extractor. To alleviate this issue, we reformulate fine-tuning as calibrating the biased features of novel samples, conditioned on a fixed feature extractor, through an auxiliary network. Technically, a self-calibration framework is proposed that constructs improved image-level features by progressively performing local alignment based on a self-supervised Transformer. Extensive experiments demonstrate that the proposed method substantially outperforms state-of-the-art methods.
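The abstract describes the core mechanism only at a high level: a frozen pre-trained backbone yields biased local features, and a small trainable Transformer refines them via self-attention before pooling into an image-level representation. The following is a minimal sketch of that idea, not the authors' implementation; the ResNet-18 backbone, token dimensions, and residual-calibration design are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SelfCalibration(nn.Module):
    """Sketch: frozen feature extractor + auxiliary Transformer calibrator."""

    def __init__(self, dim=512, heads=4, layers=2):
        super().__init__()
        # Frozen backbone; in practice this would carry pre-trained weights
        # from the base dataset (weights=None here to keep the sketch offline).
        backbone = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.encoder.parameters():
            p.requires_grad = False  # only the calibrator is trained
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.calibrator = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, x):
        fmap = self.encoder(x)                    # (B, C, H, W) local features
        tokens = fmap.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        aligned = self.calibrator(tokens)         # self-attention = "local alignment"
        raw = tokens.mean(dim=1)                  # biased image-level feature
        return raw + aligned.mean(dim=1)          # residual calibration


model = SelfCalibration()
feat = model(torch.randn(2, 3, 224, 224))
print(feat.shape)  # torch.Size([2, 512])
```

Freezing the backbone keeps the number of trainable parameters small (only the two Transformer layers), which is the overfitting argument made in the abstract; the residual connection is one plausible way to "calibrate" rather than replace the biased feature.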