Authors: Yahui Liu; Jian Wang; Yuntai Yang; Renlong Wang; Simiao Wang
{"title":"带噪声标签的分层容噪元学习","authors":"Yahui Liu;Jian Wang;Yuntai Yang;Renlong Wang;Simiao Wang","doi":"10.1109/LSP.2024.3480033","DOIUrl":null,"url":null,"abstract":"Due to the detrimental impact of noisy labels on the generalization of deep neural networks, learning with noisy labels has become an important task in modern deep learning applications. Many previous efforts have mitigated this problem by either removing noisy samples or correcting labels. In this letter, we address this issue from a new perspective and empirically find that models trained with both clean and mislabeled samples exhibit distinguishable activation feature distributions. Building on this observation, we propose a novel meta-learning approach called the Hierarchical Noise-tolerant Meta-Learning (HNML) method, which involves a bi-level optimization comprising meta-training and meta-testing. In the meta-training stage, we incorporate consistency loss at the output prediction hierarchy to facilitate model adaptation to dynamically changing label noise. In the meta-testing stage, we extract activation feature distributions using class activation maps and propose a new mask-guided self-learning method to correct biases in the foreground regions. Through the bi-level optimization of HNML, we ensure that the model generates discriminative feature representations that are insensitive to noisy labels. 
When evaluated on both synthetic and real-world noisy datasets, our HNML method achieves significant improvements over previous state-of-the-art methods.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"3020-3024"},"PeriodicalIF":3.2000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hierarchical Noise-Tolerant Meta-Learning With Noisy Labels\",\"authors\":\"Yahui Liu;Jian Wang;Yuntai Yang;Renlong Wang;Simiao Wang\",\"doi\":\"10.1109/LSP.2024.3480033\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to the detrimental impact of noisy labels on the generalization of deep neural networks, learning with noisy labels has become an important task in modern deep learning applications. Many previous efforts have mitigated this problem by either removing noisy samples or correcting labels. In this letter, we address this issue from a new perspective and empirically find that models trained with both clean and mislabeled samples exhibit distinguishable activation feature distributions. Building on this observation, we propose a novel meta-learning approach called the Hierarchical Noise-tolerant Meta-Learning (HNML) method, which involves a bi-level optimization comprising meta-training and meta-testing. In the meta-training stage, we incorporate consistency loss at the output prediction hierarchy to facilitate model adaptation to dynamically changing label noise. In the meta-testing stage, we extract activation feature distributions using class activation maps and propose a new mask-guided self-learning method to correct biases in the foreground regions. Through the bi-level optimization of HNML, we ensure that the model generates discriminative feature representations that are insensitive to noisy labels. 
When evaluated on both synthetic and real-world noisy datasets, our HNML method achieves significant improvements over previous state-of-the-art methods.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"31 \",\"pages\":\"3020-3024\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10716287/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10716287/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Hierarchical Noise-Tolerant Meta-Learning With Noisy Labels
Due to the detrimental impact of noisy labels on the generalization of deep neural networks, learning with noisy labels has become an important task in modern deep learning applications. Many previous efforts have mitigated this problem by either removing noisy samples or correcting labels. In this letter, we address this issue from a new perspective and empirically find that models trained with both clean and mislabeled samples exhibit distinguishable activation feature distributions. Building on this observation, we propose a novel meta-learning approach called the Hierarchical Noise-tolerant Meta-Learning (HNML) method, which involves a bi-level optimization comprising meta-training and meta-testing. In the meta-training stage, we incorporate consistency loss at the output prediction hierarchy to facilitate model adaptation to dynamically changing label noise. In the meta-testing stage, we extract activation feature distributions using class activation maps and propose a new mask-guided self-learning method to correct biases in the foreground regions. Through the bi-level optimization of HNML, we ensure that the model generates discriminative feature representations that are insensitive to noisy labels. When evaluated on both synthetic and real-world noisy datasets, our HNML method achieves significant improvements over previous state-of-the-art methods.
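The abstract names two concrete ingredients: a consistency loss at the output prediction level, and foreground masks derived from class activation maps. As an illustrative sketch only (this is not the authors' implementation; the function names, the MSE form of the consistency loss, and the CAM threshold are all assumptions), the two pieces might look like this in NumPy:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared error between two predictive distributions --
    one common form of output-level consistency regularization."""
    return float(np.mean((softmax(logits_a) - softmax(logits_b)) ** 2))

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the last-conv channel maps by the final
    linear classifier's weights for the target class.

    feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights:   (num_classes, C) weights of the final linear layer.
    class_idx:    class for which to build the map.
    """
    # Weighted sum over channels -> a (H, W) spatial map.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    # Normalize to [0, 1] so the map can be thresholded into a mask.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def foreground_mask(cam, threshold=0.5):
    # Binary mask of high-activation (foreground) regions, which the
    # letter's mask-guided self-learning step would then operate on.
    return (cam >= threshold).astype(np.float32)
```

In a full pipeline the mask would gate which spatial regions contribute to the self-learning correction, while the consistency term ties the meta-trained model's predictions to a reference output as the label noise changes.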
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.