Adaptive estimation of instance-dependent noise transition matrix for learning with instance-dependent label noise

Yuan Wang, Huaxin Pang, Ying Qin, Shikui Wei, Yao Zhao

Neural Networks, Volume 188, Article 107464. Published 2025-04-12. DOI: 10.1016/j.neunet.2025.107464. URL: https://www.sciencedirect.com/science/article/pii/S0893608025003430
Abstract
Instance-dependent noise (IDN) is pervasive in real-world datasets and severely hinders the effective application of deep neural networks. In contrast to class-dependent noise, IDN is influenced not only by the class but also by the intrinsic features of each instance. Current methods for handling IDN typically estimate the instance-dependent noise transition matrix (IDNT). However, these approaches often either assume a specific parametric form for the IDNT or rely on anchor points, both of which can introduce significant estimation error. To tackle this issue, we propose a method that requires neither an assumed form for the IDNT nor anchor points. Specifically, by computing similarity scores between each instance's feature representation and the label representations, the instance-label confusion matrix (ILCM) captures the relationships between global instance features and the different categories, providing valuable insight into the degree of noise. We then adaptively combine the noisy class posteriors, taken from network predictions (for noisy data) or given labels (for clean data), with the ILCM, weighted by the degree of noise, to improve the accuracy of IDNT estimation. Finally, the estimated IDNT adjusts the loss function, yielding a more robust classifier. Comprehensive experiments comparing our method with state-of-the-art approaches on synthetic IDN datasets (F-MNIST, SVHN, CIFAR-10, CIFAR-100) and a real-world noisy dataset (Clothing1M) demonstrate the superiority and effectiveness of our approach.
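To make the pipeline concrete, below is a minimal PyTorch sketch of the idea the abstract describes: build an ILCM from instance-label similarities, blend it with the model's class posterior using a per-instance noise weight, and use the result to correct the loss. All names here (build_ilcm, corrected_loss, the convex blending rule, and the stand-in noise weights) are illustrative assumptions, not the authors' released code; the paper's exact estimator may differ.

```python
# Hypothetical sketch of ILCM-based loss correction; not the authors' implementation.
import torch
import torch.nn.functional as F

def build_ilcm(features, label_embeds):
    # Instance-label confusion matrix: softmax over cosine similarities between
    # each instance's feature vector and every class's label representation.
    f = F.normalize(features, dim=1)        # (N, D) instance features
    c = F.normalize(label_embeds, dim=1)    # (C, D) label representations
    return F.softmax(f @ c.t(), dim=1)      # (N, C), one confusion row per instance

def corrected_loss(logits, ilcm_rows, noisy_labels, noise_degree):
    # Per-instance correction: blend the model's clean-class posterior with the
    # ILCM row, weighted by an estimated degree of noise w in [0, 1]. This is
    # equivalent to forward correction with T_i = (1 - w_i) * I + w_i * 1 @ ilcm_i^T.
    p_clean = F.softmax(logits, dim=1)               # (N, C) clean posteriors
    w = noise_degree.unsqueeze(1)                    # (N, 1) noise weights
    p_noisy = (1 - w) * p_clean + w * ilcm_rows      # (N, C) corrected posteriors
    return F.nll_loss(torch.log(p_noisy + 1e-8), noisy_labels)

# Toy usage with random tensors standing in for a real backbone and label encoder.
N, C, D = 32, 10, 64
feats = torch.randn(N, D)
label_embeds = torch.randn(C, D)
logits = torch.randn(N, C, requires_grad=True)
noisy_labels = torch.randint(0, C, (N,))
w = torch.rand(N)                                    # assumed per-instance noise estimates
loss = corrected_loss(logits, build_ilcm(feats, label_embeds), noisy_labels, w)
loss.backward()
```

Note that when w_i = 0 the loss reduces to standard cross-entropy on the given label (the clean-data case), and as w_i grows the ILCM increasingly dominates, which matches the abstract's adaptive weighting by degree of noise.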
About the journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural network research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.