Contrastive Open-Set Active Learning-Based Sample Selection for Image Classification

Zizheng Yan; Delian Ruan; Yushuang Wu; Junshi Huang; Zhenhua Chai; Xiaoguang Han; Shuguang Cui; Guanbin Li

IEEE Transactions on Image Processing, vol. 33, pp. 5525-5537. Published 2024-09-05.
DOI: 10.1109/TIP.2024.3451928
https://ieeexplore.ieee.org/document/10667005/
Citations: 0
Abstract
In this paper, we address a complex but practical scenario in Active Learning (AL) known as open-set AL, where the unlabeled data consists of both in-distribution (ID) and out-of-distribution (OOD) samples. Standard AL methods fail in this scenario because OOD samples are highly likely to be regarded as uncertain samples, leading to their selection and a waste of the labeling budget. Existing methods focus on selecting the samples most likely to be ID, which tend to be easy and less informative. To this end, we introduce two criteria, namely contrastive confidence and historical divergence, which measure the likelihood of a sample being ID and its hardness, respectively. By balancing the two proposed criteria, highly informative ID samples can be selected as much as possible. Furthermore, unlike previous methods that require additional neural networks to detect the OOD samples, we propose a contrastive clustering framework that endows the classifier itself with the ability to identify OOD samples and further enhances the network's representation learning. The experimental results demonstrate that the proposed method achieves state-of-the-art performance on several benchmark datasets.
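The selection principle described in the abstract — trading off a sample's likelihood of being ID against its hardness — can be sketched roughly as follows. This is a minimal illustrative sketch only: the linear trade-off, the `alpha` weight, the random placeholder scores, and all function names are assumptions for illustration, not the paper's actual scoring formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample scores for an unlabeled pool of 10 samples.
# contrastive_confidence: higher = more likely in-distribution (ID).
# historical_divergence: higher = predictions varied more across training
# rounds, used here as a proxy for sample hardness / informativeness.
contrastive_confidence = rng.random(10)
historical_divergence = rng.random(10)

def select_batch(conf, div, budget, alpha=0.5):
    """Rank samples by a weighted trade-off between likely-ID confidence
    and hardness, then return the indices of the top `budget` samples."""
    score = alpha * conf + (1 - alpha) * div
    return np.argsort(score)[::-1][:budget]

batch = select_batch(contrastive_confidence, historical_divergence, budget=3)
print(batch)
```

With `alpha` close to 1 the selection favors confidently-ID (often easy) samples; with `alpha` close to 0 it favors hard samples at the risk of picking OOD ones, which is exactly the tension the paper's two criteria are designed to balance.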