Multi-granularity ensemble sample selection and label correction for classification with noisy labels

IF 7.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Kecan Cai , Hongyun Zhang , Witold Pedrycz , Duoqian Miao , Chaofan Chen
DOI: 10.1016/j.asoc.2025.113266
Journal: Applied Soft Computing, Volume 180, Article 113266
Published: 2025-06-14 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S1568494625005770
Citations: 0

Abstract

Sample selection is crucial in classification tasks with noisy labels, yet most existing sample selection methods rely on a single criterion. These approaches often face challenges, including low purity of selected clean samples, and underfitting due to an insufficient number of selected clean training samples. To address these challenges, this paper proposes GNet-SSLC, a novel multi-granularity network framework that integrates multiple criteria ensemble sample selection (SS) and multiple views label correction (LC). In the SS phase, this paper proposes a metric learning-based dual k-Nearest Neighbor (k-NN) sample selection method. This method first uses corrected soft labels from the initial k-NN round to guide the selection of clean samples in the subsequent k-NN round. To further enhance selection accuracy, we combine this dual k-NN approach with a small loss sample selection technique through a voting mechanism. This multiple criteria ensemble method addresses the issues of low purity and instability inherent in single criterion approaches. In the LC phase, this paper designs a multiple views label correction framework that generates high-quality pseudo-labels for selected noisy samples. A key innovation of the framework is the design of a regularized contrastive learning loss, which optimizes the semi-supervised learning process by leveraging multiple views of training samples. The additional inclusion of training samples with high-quality pseudo-labels can effectively mitigate underfitting caused by a limited number of clean training samples. Experimental results on both synthetic and real-world noisy datasets indicate that GNet-SSLC enhances the purity and stability of the selected clean samples, and significantly improves classification performance. The enhancement is particularly notable with high noise rate dataset, such as CIFAR-100 dataset with 80% noise rate, achieving a 19.3% increase in classification accuracy compared to the baseline method.
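To make the selection phase concrete, the sketch below illustrates the general idea of combining a k-NN label-agreement criterion with a small-loss criterion through a voting mechanism. This is a minimal illustration of multi-criteria ensemble sample selection, not the paper's actual GNet-SSLC implementation; all function names, the Euclidean metric, and the `keep_ratio` threshold are assumptions for the sketch (the paper uses learned metrics and corrected soft labels, which are omitted here).

```python
# Hedged sketch: ensemble sample selection under label noise.
# A sample is voted "clean" only when BOTH criteria agree:
#   1) its label matches the majority label of its k nearest neighbours;
#   2) its per-sample training loss falls in the smallest keep_ratio fraction.
import numpy as np

def knn_agreement(features, labels, k=5):
    """Vote 'clean' if a sample's label matches the majority label
    of its k nearest neighbours in feature space."""
    n = len(labels)
    votes = np.zeros(n, dtype=bool)
    # pairwise Euclidean distances (fine for small n; use a KD-tree at scale)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    for i in range(n):
        neighbours = np.argsort(dists[i])[1:k + 1]  # skip the sample itself
        majority = np.bincount(labels[neighbours]).argmax()
        votes[i] = (labels[i] == majority)
    return votes

def small_loss(losses, keep_ratio=0.5):
    """Vote 'clean' for the keep_ratio fraction of samples with the
    smallest per-sample training loss (the classic small-loss trick)."""
    threshold = np.quantile(losses, keep_ratio)
    return losses <= threshold

def ensemble_select(features, labels, losses, k=5, keep_ratio=0.5):
    """Voting mechanism: keep a sample only when both criteria agree."""
    return knn_agreement(features, labels, k) & small_loss(losses, keep_ratio)

# Toy usage: two well-separated clusters; the last point is mislabeled
# and also has a large training loss, so both criteria reject it.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
labels = np.array([0] * 5 + [1] * 4 + [0])   # last sample carries label noise
losses = np.array([0.1] * 9 + [2.0])         # noisy sample has a large loss
clean = ensemble_select(feats, labels, losses, k=3, keep_ratio=0.9)
```

Requiring agreement between the two criteria is what raises the purity of the selected set relative to either criterion alone, at the cost of selecting fewer samples; the paper's label-correction phase then recovers training signal from the rejected samples via pseudo-labels.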
Source journal
Applied Soft Computing (Engineering & Technology – Computer Science, Interdisciplinary Applications)
CiteScore: 15.80
Self-citation rate: 6.90%
Articles per year: 874
Average review time: 10.9 months
Journal description: Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is on publishing the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and similar techniques to address real-world complexity. Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them, so the website is continuously updated with new articles and publication times are short.