Towards Natural Machine Unlearning
Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. DOI: 10.1109/TPAMI.2025.3597350
Abstract
Machine unlearning (MU) aims to eliminate information learned from specific training data, namely the forgetting data, from a pretrained model. Currently, mainstream relabeling-based MU methods modify the forgetting data with incorrect labels and then fine-tune the model. While learning such incorrect information can indeed remove knowledge, the process is quite unnatural: the unlearning procedure undesirably reinforces the incorrect information and leads to over-forgetting. Towards more natural machine unlearning, we inject correct information from the remaining data into the forgetting samples when changing their labels. By pairing these adjusted samples with their new labels, the model tends to rely on the injected correct information and naturally suppresses the information meant to be forgotten. Although straightforward, this first step towards natural machine unlearning significantly outperforms current state-of-the-art approaches. In particular, our method substantially reduces over-forgetting and exhibits strong robustness across different unlearning tasks, making it a promising candidate for practical machine unlearning.
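To make the core idea concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: correct information from a remaining-data sample is injected into each forgetting sample (here, via a simple convex pixel-space blend), and the adjusted sample is paired with the remaining sample's correct label before fine-tuning. The blend mechanism, the `blend_ratio` parameter, and all function names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def make_natural_unlearning_batch(forget_x, remain_x, remain_y, blend_ratio=0.5):
    """Hypothetical sketch: inject information from remaining-data samples
    into forgetting samples, then pair each adjusted sample with the
    remaining sample's (correct) label.

    Assumption: injection is modeled as a convex blend in input space;
    the paper may realize the injection differently.
    """
    # Blend each forgetting sample with a paired remaining sample.
    mixed_x = blend_ratio * remain_x + (1.0 - blend_ratio) * forget_x
    # Use the remaining sample's label so the model leans on the injected
    # correct information instead of memorizing an arbitrary wrong label.
    return mixed_x, remain_y

def unlearning_step(model, optimizer, forget_x, remain_x, remain_y):
    """One fine-tuning step on the adjusted samples (sketch only)."""
    model.train()
    mixed_x, target_y = make_natural_unlearning_batch(forget_x, remain_x, remain_y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(mixed_x), target_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The contrast with plain relabeling is that the new label here is genuinely correct for the injected content, so gradient updates need not reinforce false associations; this is the abstract's stated mechanism for avoiding over-forgetting, though the exact training loop above is an assumed instantiation.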