Recursive Confidence Training for Pseudo-Labeling Calibration in Semi-Supervised Few-Shot Learning

Kunlei Jing;Hebo Ma;Chen Zhang;Lei Wen;Zhaorui Zhang
{"title":"Recursive Confidence Training for Pseudo-Labeling Calibration in Semi-Supervised Few-Shot Learning","authors":"Kunlei Jing;Hebo Ma;Chen Zhang;Lei Wen;Zhaorui Zhang","doi":"10.1109/TIP.2025.3569196","DOIUrl":null,"url":null,"abstract":"Semi-Supervised Few-Shot Learning (SSFSL) aims to address the data scarcity in few-shot learning by leveraging both a few labeled support data and abundant unlabeled data. In SSFSL, a classifier trained on scarce support data is often biased and thus assigns inaccurate pseudo-labels to the unlabeled data, which will mislead downstream learning tasks. To combat this issue, we introduce a novel method called Certainty-Aware Recursive Confidence Training (CARCT). CARCT hinges on the insight that selecting pseudo-labeled data based on confidence levels can yield more informative support data, which is crucial for retraining an unbiased classifier to achieve accurate pseudo-labeling—a process we term pseudo-labeling calibration. We observe that accurate pseudo-labels typically exhibit smaller certainty entropy, indicating high-confidence pseudo-labeling compared to those of inaccurate pseudo-labels. Accordingly, CARCT constructs a joint double-Gaussian model to fit the certainty entropies collected across numerous SSFSL tasks. Thereby, A semi-supervised Prior Confidence Distribution (ssPCD) is learned to aid in distinguishing between high-confidence and low-confidence pseudo-labels. During an SSFSL task, ssPCD guides the selection of both high-confidence and low-confidence pseudo-labeled data to retrain the classifier that then assigns more accurate pseudo-labels to the low-confidence pseudo-labeled data. Such recursive confidence training continues until the low-confidence ones are exhausted, terminating the pseudo-labeling calibration. The unlabeled data all receive accurate pseudo-labels to expand the few support data to generalize the downstream learning task, which in return meta-refines the classifier, named self-training, to boost the pseudo-labeling in subsequent tasks. Extensive experiments on basic and extended SSFSL setups showcase the superiority of CARCT versus state-of-the-art methods, and comprehensive ablation studies and visualizations justify our insight. The source code is available at <uri>https://github.com/Klein-JING/CARCT</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3194-3208"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11006398/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Semi-Supervised Few-Shot Learning (SSFSL) aims to address the data scarcity in few-shot learning by leveraging both a few labeled support data and abundant unlabeled data. In SSFSL, a classifier trained on the scarce support data is often biased and thus assigns inaccurate pseudo-labels to the unlabeled data, which misleads downstream learning tasks. To combat this issue, we introduce a novel method called Certainty-Aware Recursive Confidence Training (CARCT). CARCT hinges on the insight that selecting pseudo-labeled data based on confidence levels can yield more informative support data, which is crucial for retraining an unbiased classifier to achieve accurate pseudo-labeling, a process we term pseudo-labeling calibration. We observe that accurate pseudo-labels typically exhibit smaller certainty entropy than inaccurate ones, indicating higher-confidence pseudo-labeling. Accordingly, CARCT constructs a joint double-Gaussian model to fit the certainty entropies collected across numerous SSFSL tasks, from which a semi-supervised Prior Confidence Distribution (ssPCD) is learned to help distinguish between high-confidence and low-confidence pseudo-labels. During an SSFSL task, the ssPCD guides the selection of both high-confidence and low-confidence pseudo-labeled data to retrain the classifier, which then assigns more accurate pseudo-labels to the low-confidence pseudo-labeled data. This recursive confidence training continues until the low-confidence data are exhausted, terminating the pseudo-labeling calibration. All unlabeled data thereby receive accurate pseudo-labels, expanding the few support data to generalize the downstream learning task; this in turn meta-refines the classifier, a step named self-training, to boost pseudo-labeling in subsequent tasks. Extensive experiments on basic and extended SSFSL setups showcase the superiority of CARCT over state-of-the-art methods, and comprehensive ablation studies and visualizations justify our insight. The source code is available at https://github.com/Klein-JING/CARCT
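
To make the recursive loop concrete, the sketch below gives one plausible reading of the abstract, not the authors' implementation (see the repository above for that). Certainty entropy is computed from softmax predictions, and a per-task two-component Gaussian mixture stands in for the ssPCD prior, which the paper actually learns offline across many tasks. All function names, the scikit-learn-style classifier interface, and the rule "retrain on support plus high-confidence data, then re-label the low-confidence remainder" are assumptions for illustration.

```python
# Hypothetical sketch of CARCT-style recursive confidence training.
# The GMM-based confidence split is a per-task stand-in for the
# cross-task ssPCD prior described in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def certainty_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each softmax prediction (lower = more certain)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def split_by_confidence(entropies: np.ndarray) -> np.ndarray:
    """Fit a two-component Gaussian mixture to the certainty entropies and
    mark samples assigned to the low-mean component as high-confidence."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    comp = gmm.fit_predict(entropies.reshape(-1, 1))
    low_entropy_comp = int(np.argmin(gmm.means_.ravel()))
    return comp == low_entropy_comp  # boolean mask: True = high confidence

def recursive_confidence_training(classifier, support_x, support_y, unlabeled_x):
    """Repeatedly promote high-confidence pseudo-labeled data into the
    support set and retrain, until no low-confidence samples remain."""
    remaining = unlabeled_x
    while len(remaining) > 0:
        probs = classifier.predict_proba(remaining)
        pseudo_y = probs.argmax(axis=1)
        high_conf = split_by_confidence(certainty_entropy(probs))
        if not high_conf.any():  # degenerate split: accept all to terminate
            high_conf[:] = True
        # Expand the support set with high-confidence pseudo-labeled data.
        support_x = np.concatenate([support_x, remaining[high_conf]])
        support_y = np.concatenate([support_y, pseudo_y[high_conf]])
        classifier.fit(support_x, support_y)   # retrain a less biased classifier
        remaining = remaining[~high_conf]      # re-label the rest next round
    return classifier, support_x, support_y
```

With any scikit-learn-style classifier exposing fit/predict_proba (e.g., LogisticRegression), the loop expands the support set round by round until every unlabeled sample has been promoted, mirroring the termination condition described in the abstract.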