Ran Yang, Yihao Zhang, Kaibei Li, Qinyang He, Xiaokang Li, Wei Zhou
{"title":"公平推荐的对抗正则化扩散模型","authors":"Ran Yang , Yihao Zhang , Kaibei Li , Qinyang He , Xiaokang Li , Wei Zhou","doi":"10.1016/j.neunet.2025.107695","DOIUrl":null,"url":null,"abstract":"<div><div>With the widespread deployment of recommendation systems, concerns have grown over algorithmic fairness and representation bias in recommendation outcomes. Existing debiasing methods primarily suffer from two critical limitations: (1) Explicit feature removal strategies risk eliminating semantic signals entangled with sensitive attributes, inevitably degrading recommendation performance. (2) Conventional adversarial learning frameworks impose rigid gradient reversal to enforce independence from sensitive attributes, yet cause semantic distortion in latent representations through uncontrolled adversarial conflicts between fairness objectives and recommendation goals.</div><div>To address these challenges, we propose a fairness-aware recommendation framework leveraging the dynamic equilibrium of diffusion model. During the forward diffusion process, we introduce adaptive gradient-aware noise injection, where fairness discriminators from the reverse denoising process guide Gaussian perturbations through their aggregated gradient statistics, achieving feature-aware bias dissociation while preserving user interest semantics. The reverse denoising process employs adversarial regularization with sensitivity-aware gradient constraints, iteratively purifying recommendation-oriented embeddings through alternating optimization of denoising prediction and fairness discrimination tasks. To further enhance fairness-utility tradeoffs, we design an interest fusion mechanism at denoising initialization and develop a bias-controlled rounding function for candidate generation. Extensive experiments on three real-world datasets with sensitive attributes demonstrate that our model outperforms state-of-the-art methods in recommendation accuracy and fairness. We publish the source code at <span><span>https://github.com/YangRan993/DiffuFair</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107695"},"PeriodicalIF":6.0000,"publicationDate":"2025-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial regularized diffusion model for fair recommendations\",\"authors\":\"Ran Yang , Yihao Zhang , Kaibei Li , Qinyang He , Xiaokang Li , Wei Zhou\",\"doi\":\"10.1016/j.neunet.2025.107695\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the widespread deployment of recommendation systems, concerns have grown over algorithmic fairness and representation bias in recommendation outcomes. Existing debiasing methods primarily suffer from two critical limitations: (1) Explicit feature removal strategies risk eliminating semantic signals entangled with sensitive attributes, inevitably degrading recommendation performance. (2) Conventional adversarial learning frameworks impose rigid gradient reversal to enforce independence from sensitive attributes, yet cause semantic distortion in latent representations through uncontrolled adversarial conflicts between fairness objectives and recommendation goals.</div><div>To address these challenges, we propose a fairness-aware recommendation framework leveraging the dynamic equilibrium of diffusion model. 
During the forward diffusion process, we introduce adaptive gradient-aware noise injection, where fairness discriminators from the reverse denoising process guide Gaussian perturbations through their aggregated gradient statistics, achieving feature-aware bias dissociation while preserving user interest semantics. The reverse denoising process employs adversarial regularization with sensitivity-aware gradient constraints, iteratively purifying recommendation-oriented embeddings through alternating optimization of denoising prediction and fairness discrimination tasks. To further enhance fairness-utility tradeoffs, we design an interest fusion mechanism at denoising initialization and develop a bias-controlled rounding function for candidate generation. Extensive experiments on three real-world datasets with sensitive attributes demonstrate that our model outperforms state-of-the-art methods in recommendation accuracy and fairness. We publish the source code at <span><span>https://github.com/YangRan993/DiffuFair</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"190 \",\"pages\":\"Article 107695\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025005751\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025005751","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Adversarial regularized diffusion model for fair recommendations
With the widespread deployment of recommendation systems, concerns have grown over algorithmic fairness and representation bias in recommendation outcomes. Existing debiasing methods primarily suffer from two critical limitations: (1) Explicit feature removal strategies risk eliminating semantic signals entangled with sensitive attributes, inevitably degrading recommendation performance. (2) Conventional adversarial learning frameworks impose rigid gradient reversal to enforce independence from sensitive attributes, yet cause semantic distortion in latent representations through uncontrolled adversarial conflicts between fairness objectives and recommendation goals.
To address these challenges, we propose a fairness-aware recommendation framework that leverages the dynamic equilibrium of the diffusion model. During the forward diffusion process, we introduce adaptive gradient-aware noise injection, where fairness discriminators from the reverse denoising process guide the Gaussian perturbations through their aggregated gradient statistics, achieving feature-aware bias dissociation while preserving user interest semantics. The reverse denoising process employs adversarial regularization with sensitivity-aware gradient constraints, iteratively purifying recommendation-oriented embeddings through alternating optimization of the denoising prediction and fairness discrimination tasks. To further improve the fairness-utility tradeoff, we design an interest fusion mechanism at denoising initialization and develop a bias-controlled rounding function for candidate generation. Extensive experiments on three real-world datasets with sensitive attributes demonstrate that our model outperforms state-of-the-art methods in both recommendation accuracy and fairness. The source code is available at https://github.com/YangRan993/DiffuFair.
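To make the two core mechanisms concrete, the following is a minimal PyTorch sketch of (i) forward diffusion with gradient-aware noise injection, where the fairness discriminator's aggregated gradient statistics decide how strongly each embedding dimension is perturbed, and (ii) reverse denoising trained by alternating a reconstruction objective with an adversarial fairness regularizer. Everything here (the FairnessDiscriminator and Denoiser classes, gradient_aware_noise, training_step, base_sigma, lam) is an illustrative assumption, not the released DiffuFair implementation; see the repository above for the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FairnessDiscriminator(nn.Module):
    """Predicts a binary sensitive attribute (e.g., gender) from an embedding."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 1)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).squeeze(-1)  # logits, shape (batch,)


class Denoiser(nn.Module):
    """Toy denoising network: maps a noised interaction embedding (plus a
    timestep embedding) back toward the clean embedding."""

    def __init__(self, dim: int, n_steps: int = 10):
        super().__init__()
        self.t_emb = nn.Embedding(n_steps, dim)
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(x_t + self.t_emb(t))


def gradient_aware_noise(x0, discriminator, sensitive, base_sigma=0.1, scale=1.0):
    """Forward-diffusion step: dimensions whose discriminator gradients are
    large (i.e., carry more sensitive-attribute signal) receive more noise."""
    z = x0.detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(discriminator(z), sensitive.float())
    (grad,) = torch.autograd.grad(loss, z)
    sensitivity = grad.abs().mean(dim=0, keepdim=True)  # per-dimension score
    sigma = base_sigma * (1.0 + scale * sensitivity / (sensitivity.mean() + 1e-8))
    return x0.detach() + sigma * torch.randn_like(x0)


def training_step(denoiser, disc, opt_den, opt_disc, x0, t, sensitive, lam=0.5):
    """One alternating step: (1) train the discriminator to detect the
    sensitive attribute in the denoised embedding; (2) train the denoiser to
    reconstruct x0 while pushing the discriminator toward chance-level
    predictions (a soft adversarial regularizer, not a hard
    gradient-reversal layer)."""
    x_t = gradient_aware_noise(x0, disc, sensitive)

    # (1) Discriminator update (denoiser output detached).
    loss_disc = F.binary_cross_entropy_with_logits(
        disc(denoiser(x_t, t).detach()), sensitive.float()
    )
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()

    # (2) Denoiser update: reconstruction plus a fairness regularizer that
    # targets 0.5 probability, i.e. an uninformative discriminator.
    opt_den.zero_grad()
    x0_hat = denoiser(x_t, t)
    loss_rec = F.mse_loss(x0_hat, x0)
    loss_fair = F.binary_cross_entropy_with_logits(
        disc(x0_hat), torch.full_like(sensitive, 0.5, dtype=torch.float32)
    )
    (loss_rec + lam * loss_fair).backward()
    opt_den.step()
    return loss_rec.item(), loss_fair.item()


if __name__ == "__main__":
    dim, batch, n_steps = 64, 32, 10
    denoiser, disc = Denoiser(dim, n_steps), FairnessDiscriminator(dim)
    opt_den = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
    x0 = torch.randn(batch, dim)               # stand-in interaction embeddings
    sensitive = torch.randint(0, 2, (batch,))  # binary sensitive attribute
    t = torch.randint(0, n_steps, (batch,))    # diffusion timesteps
    print(training_step(denoiser, disc, opt_den, opt_disc, x0, t, sensitive))
```

Note that step (2) penalizes the denoiser only to the degree that the discriminator can still detect the sensitive attribute, targeting chance-level (0.5) predictions rather than hard-reversing gradients; this is one way to soften the uncontrolled adversarial conflicts between fairness and recommendation objectives criticized above.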
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.