Enhancing partition distinction: A contrastive policy to recommendation unlearning
Lin Li, Shengda Zhuo, Hongguang Lin, Jinchun He, Wangjie Qiu, Qinnan Zhang, Changdong Wang, Shuqiang Huang
Neural Networks, Volume 190, Article 107667. Published 2025-06-06. DOI: 10.1016/j.neunet.2025.107667
Citations: 0
Abstract
With the growing privacy and data contamination concerns in recommendation systems, recommendation unlearning, i.e., unlearning the impact of specific learned data, has garnered increasing attention. Unfortunately, existing research primarily focuses on the complete unlearning of target data, neglecting the balance between unlearning integrity, practicality, and efficiency. Two major restrictions hinder the widespread application of this unlearning paradigm in practice. First, while prior studies often assume consistent similarity among samples, they overly emphasize the local collaborative relationships between samples and central nodes, leading to an imbalance between local and global collaborative information. Second, while data partitioning appears to be a default setup, it evidently exacerbates the sparsity of recommendation data, which can negatively impact recommendation quality. To fill these gaps, this paper proposes a data partitioning and submodel training strategy, named Partition Distinction with Contrastive Recommendation Unlearning (PDCRU), which aims to balance data partitioning and feature sparsity. The key idea is to extract structural features as global collaborative information for samples and to introduce structural feature constraints based on sample similarity during the partitioning process. For submodel training, we leverage contrastive learning to introduce additional high-quality training signals that enhance model embeddings. Extensive experiments validate the feasibility and consistent superiority of our method over existing recommendation unlearning models in both learning and unlearning. Specifically, our model achieves a 4.83% improvement in performance and a 4.64x improvement in unlearning efficiency compared to baseline methods. The code is released at https://github.com/linli0818/PDCRU.
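The submodel-training component described above relies on contrastive learning over embeddings. As a rough illustration only, the sketch below shows a generic InfoNCE-style contrastive loss in PyTorch of the kind commonly used to add such training signals; it is not the authors' implementation (for that, see https://github.com/linli0818/PDCRU), and the function names, batch shapes, and the two-view augmentation in the usage example are all hypothetical assumptions.

    # Minimal, generic InfoNCE-style contrastive loss (illustrative sketch,
    # NOT the PDCRU implementation). All names here are hypothetical.
    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor,
                      temperature: float = 0.2) -> torch.Tensor:
        """Pull each anchor embedding toward its positive view and push it
        away from all other samples in the batch."""
        anchor = F.normalize(anchor, dim=-1)      # (B, d), unit-norm rows
        positive = F.normalize(positive, dim=-1)  # (B, d)
        # (B, B) cosine-similarity matrix, scaled by temperature
        logits = anchor @ positive.t() / temperature
        # Diagonal entries are the positive pairs; all others are negatives
        labels = torch.arange(anchor.size(0), device=anchor.device)
        return F.cross_entropy(logits, labels)

    # Hypothetical usage: two perturbed views of the same embeddings
    # (e.g., from dropout or graph augmentation) serve as positive pairs.
    emb_view1 = torch.randn(64, 32)
    emb_view2 = emb_view1 + 0.05 * torch.randn(64, 32)
    loss = info_nce_loss(emb_view1, emb_view2)
    print(loss.item())

In a partitioned-training setting like the one the abstract describes, a loss of this form would typically be added to each submodel's recommendation objective so that sparse partitions still receive dense self-supervised training signals; how PDCRU constructs its views and weights this term is specified in the paper itself.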
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.