Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning

Xingping Dong;Tianran Ouyang;Shengcai Liao;Bo Du;Ling Shao
IEEE Transactions on Image Processing, vol. 33, pp. 5663-5675, 2024
Published online: September 20, 2024
DOI: 10.1109/TIP.2024.3461472
Code: https://github.com/ouyangtianran/PLML

Abstract

Most existing few-shot learning (FSL) methods require a large amount of labeled data for meta-training, which is a major limitation. To reduce the labeling requirement, a semi-supervised meta-training (SSMT) setting has been proposed for FSL, which includes only a few labeled samples and a large number of unlabeled samples in the base classes. However, existing methods under this setting require class-aware sample selection from the unlabeled set, which violates the assumption that the set is unlabeled. In this paper, we propose a practical semi-supervised meta-training setting with truly unlabeled data to facilitate the application of FSL in realistic scenarios. To better utilize both the labeled and the truly unlabeled data, we propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML). First, we train a classifier via common semi-supervised learning (SSL) and use it to obtain pseudo-labels for the unlabeled data. We then build few-shot tasks from the labeled and pseudo-labeled data, and design a novel fine-tuning method with feature smoothing and noise suppression to better learn the FSL model from noisy labels. Surprisingly, through extensive experiments on two FSL datasets, we find that this simple meta-training framework effectively prevents the performance degradation of various FSL models under limited labeled data, and significantly outperforms representative SSMT models. In addition, benefiting from meta-training, our method also improves several representative SSL algorithms. We provide the training code and usage examples at https://github.com/ouyangtianran/PLML.
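The two-stage idea in the abstract can be illustrated with a minimal sketch: (1) a classifier trained via SSL assigns pseudo-labels to unlabeled base-class data, keeping only confident predictions, and (2) N-way K-shot episodes are sampled from the combined labeled and pseudo-labeled pool for meta-training. This is an illustrative simplification, not the authors' implementation (see the linked repository for the actual code); the function names, the confidence-threshold filter, and the `classifier` interface are assumptions for the sketch, and the feature-smoothing/noise-suppression fine-tuning step is omitted.

```python
import random
from collections import defaultdict

def pseudo_label(classifier, unlabeled, threshold=0.9):
    """Stage 1: pseudo-label unlabeled samples, keeping only
    confident predictions to limit label noise.

    `classifier(x)` is assumed to return a list of class probabilities.
    """
    pool = []
    for x in unlabeled:
        probs = classifier(x)
        label = max(range(len(probs)), key=probs.__getitem__)
        if probs[label] >= threshold:  # simple confidence filter
            pool.append((x, label))
    return pool

def sample_episode(pool, n_way=5, k_shot=1, q_query=15):
    """Stage 2: sample one N-way K-shot few-shot task (support +
    query sets) from a pool of (sample, label) pairs."""
    by_class = defaultdict(list)
    for x, y in pool:
        by_class[y].append(x)
    # only classes with enough samples can populate a task
    eligible = [c for c, xs in by_class.items() if len(xs) >= k_shot + q_query]
    chosen = random.sample(eligible, n_way)
    support, query = [], []
    for new_y, c in enumerate(chosen):  # relabel classes 0..n_way-1
        xs = random.sample(by_class[c], k_shot + q_query)
        support += [(x, new_y) for x in xs[:k_shot]]
        query += [(x, new_y) for x in xs[k_shot:]]
    return support, query
```

In meta-training, episodes sampled this way from labeled plus pseudo-labeled data replace the fully labeled episodes that standard FSL methods assume.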