How Fair is Your Diffusion Recommender Model?
Daniele Malitesta, Giacomo Medda, Erasmo Purificato, Ludovico Boratto, Fragkiskos D. Malliaros, Mirko Marras, Ernesto William De Luca
arXiv - CS - Information Retrieval · arXiv:2409.04339 · 2024-09-06
Abstract
Diffusion-based recommender systems have recently been shown to outperform traditional generative recommendation approaches such as variational autoencoders and generative adversarial networks. Nevertheless, the machine learning literature has raised concerns that diffusion models, while learning the distribution of data samples, may inadvertently carry information bias and lead to unfair outcomes. In light of this, and considering the relevance that fairness has held in recommendation over the last few decades, we conduct one of the first fairness investigations in the literature on DiffRec, a pioneering approach in diffusion-based recommendation. First, we propose an experimental setting involving DiffRec (and its variant L-DiffRec) along with nine state-of-the-art recommendation models, two popular recommendation datasets from the fairness-aware literature, and six metrics accounting for accuracy and consumer/provider fairness. Then, we perform a twofold analysis: one part assesses models' performance under accuracy and recommendation fairness separately, while the other identifies whether, and to what extent, these metrics can strike a performance trade-off. Experimental results from both studies confirm the initial unfairness warnings, but also point to how they might be addressed in future research.
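The abstract does not define the six metrics used in the study. As a hedged illustration of how consumer fairness is commonly operationalized in this literature, the sketch below measures the gap in mean NDCG@k between two user groups (e.g., split by a demographic attribute). The function names, the binary grouping, and the NDCG simplification are all illustrative assumptions, not the paper's actual evaluation protocol.

```python
import numpy as np

def ndcg_at_k(ranked_relevance, k=10):
    """NDCG@k for one user, given binary relevance of their top-k
    recommendations in rank order. Simplified sketch: the ideal ranking
    is computed from the same truncated list, not the full catalog."""
    rel = np.asarray(ranked_relevance[:k], dtype=float)
    if rel.sum() == 0:
        return 0.0
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    idcg = float((np.sort(rel)[::-1] * discounts).sum())
    return dcg / idcg

def consumer_fairness_gap(recs_by_user, group_of_user, k=10):
    """Absolute difference in mean NDCG@k between two user groups.
    recs_by_user: dict user -> binary relevance list (rank order)
    group_of_user: dict user -> 0 or 1 (both groups assumed non-empty)"""
    scores = {0: [], 1: []}
    for user, rel in recs_by_user.items():
        scores[group_of_user[user]].append(ndcg_at_k(rel, k))
    return abs(np.mean(scores[0]) - np.mean(scores[1]))

# Toy usage: three users, two groups; a gap near 0 suggests parity.
recs = {"u1": [1, 0, 1], "u2": [0, 0, 1], "u3": [1, 1, 0]}
groups = {"u1": 0, "u2": 1, "u3": 1}
print(consumer_fairness_gap(recs, groups, k=3))
```

A provider-side analogue would compare exposure across item groups rather than utility across user groups; the paper's trade-off analysis then contrasts such fairness scores with accuracy metrics for each model.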