Title: Foster noisy label learning by exploiting noise-induced distortion in foreground localization
Authors: Ang Chen, Feng Xu, Xin Lyu, Tao Zeng, Xin Li
DOI: 10.1016/j.neunet.2025.107712
Journal: Neural Networks, Volume 191, Article 107712
Publication date: 2025-06-15
Impact factor: 6.0; JCR: Q1 (Computer Science, Artificial Intelligence); SCI Region 1 (Computer Science)
URL: https://www.sciencedirect.com/science/article/pii/S0893608025005921
Citations: 0
Abstract
Large-scale, well-annotated datasets are crucial for training deep neural networks. However, the prevalence of noisy-labeled samples can irreversibly impair model generalization. Existing approaches attempt to mitigate the impact of noisy labels by exploiting the different loss or confidence distributions of clean and noisy data to detect and correct noisy labels. This paper investigates the noise-induced distortion of foreground localization by tracking the model’s spatial attention distribution on visual activation maps. We observe that for clean samples, highly responsive regions usually focus on label-relevant foreground regions, whereas for noisy samples, the model erroneously attends to uninformative background regions or cluttered object edges due to interference from label noise. Inspired by these observations, we propose a novel two-stage foreground localization-augmented noisy label learning framework, named FLSC, that concurrently boosts the accuracy of sample selection and label correction for robust training. Specifically, FLSC first quantifies noise-induced distortion in foreground localization to strengthen conventional loss-based selection criteria, computing the information reduction incurred when deriving foreground images from the original images according to the attention distribution. Next, we propose a noise-adaptive adversarial erasing strategy that suppresses background activation through adaptive erasure regularization, eliminating overfitting to noisy samples while enhancing the learning of robust representations. To the best of our knowledge, this is the first attempt to exploit feature-activation-based localization quality evaluation to address the label noise problem. Extensive experiments on synthetic and real-world datasets validate the superior performance of FLSC compared with state-of-the-art methods.
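The two mechanisms described above can be illustrated with a minimal sketch. The abstract does not give the paper's actual formulation, so everything below is an assumption: the function names, the use of pixel energy as a proxy for "information", and the fixed thresholds are all hypothetical stand-ins for FLSC's real attention-based score and noise-adaptive erasing schedule.

```python
import numpy as np

# Hypothetical sketch only: the pixel-energy "information" proxy, the
# binarization threshold, and the erase quantile are illustrative
# assumptions, not the formulation used by FLSC.

def foreground_mask(attention, threshold=0.5):
    """Binarize a spatial attention map into a foreground mask."""
    a = (attention - attention.min()) / (attention.max() - attention.min() + 1e-8)
    return (a >= threshold).astype(np.float32)

def information_reduction(image, attention, threshold=0.5):
    """Fraction of image energy lost when keeping only the attended foreground.

    Clean samples (attention concentrated on the object) should score low;
    noisy samples (attention drifting to background) should score high,
    giving a signal that can complement loss-based sample selection.
    """
    mask = foreground_mask(attention, threshold)
    total = float(np.sum(image ** 2)) + 1e-8
    kept = float(np.sum((image * mask) ** 2))
    return 1.0 - kept / total

def adaptive_erase(image, attention, quantile=0.8):
    """Zero out the most highly activated regions.

    A crude stand-in for adversarial erasing: suppressing the currently
    dominant activations forces the model to rely on other evidence.
    """
    thr = np.quantile(attention, quantile)
    return image * (attention < thr).astype(np.float32)
```

For example, on an 8x8 image with a bright 4x4 object, attention that overlaps the object yields a near-zero information reduction, while attention concentrated on the background yields a score near one; thresholding on this score is one way such a criterion could separate clean from noisy samples.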
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.