Overcoming Diverse Undesired Effects in Recommender Systems: A Deontological Approach
Paula G. Duran, Pere Gilabert, Santi Seguí, Jordi Vitrià
ACM Transactions on Intelligent Systems and Technology · DOI: 10.1145/3643857 · Published 2024-02-01
Citations: 0
Abstract
In today’s digital landscape, recommender systems have become ubiquitous as a means of directing users towards personalized products, services, and content. However, despite their widespread adoption and a long track record of research, these systems are not immune to shortcomings. A significant challenge they face is the presence of biases, which produce various undesirable effects, most prominently popularity bias. This bias hampers the diversity of recommended items, restricting users’ exposure to less popular or niche content. Furthermore, the issue is compounded when multiple stakeholders are considered, since this requires balancing multiple, potentially conflicting objectives.
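Popularity bias of this kind is commonly quantified by how unevenly exposure is spread across the catalogue. The following minimal sketch (an illustration, not taken from the paper; the function name, toy data, and use of the Gini coefficient are assumptions) shows one standard way to measure exposure concentration over a set of recommendation lists:

```python
import numpy as np

def exposure_gini(recommendations: list[list[int]], n_items: int) -> float:
    """Gini coefficient of item exposure across all recommendation lists.

    0 means every item is recommended equally often; values close to 1
    indicate exposure concentrated on a few popular items.
    """
    counts = np.zeros(n_items)
    for rec_list in recommendations:
        for item in rec_list:
            counts[item] += 1
    sorted_counts = np.sort(counts)
    cum = np.cumsum(sorted_counts)
    # Standard Gini formula applied to the sorted exposure counts.
    return float((n_items + 1 - 2 * np.sum(cum) / cum[-1]) / n_items)

# Toy example: three users each get a top-2 list over a 5-item catalogue.
recs = [[0, 1], [0, 2], [0, 1]]
print(round(exposure_gini(recs, n_items=5), 3))  # item 0 dominates exposure
```

A higher value of this kind of exposure metric is one symptom of the restricted diversity described above.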
In this paper, we present a new approach to addressing a wide range of undesired consequences in recommender systems involving various stakeholders. Instead of adopting a consequentialist perspective that aims to mitigate the repercussions of a recommendation policy, we propose a deontological approach centered on a minimal set of ethical principles. More precisely, we introduce two distinct principles aimed at avoiding overconfidence in predictions and at accurately modeling the genuine interests of users. The proposed approach circumvents the need to define a multi-objective system, which has been identified as one of the main limitations in developing complex recommenders. Through extensive experimentation, we show the efficacy of our approach in mitigating the adverse impact of the recommender from both the user and the item perspective, ultimately improving various beyond-accuracy metrics. This study underscores the importance of responsible and equitable recommendations and proposes a strategy that can be easily deployed in real-world scenarios.
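To give a concrete sense of what an "avoid overconfidence in predictions" principle might look like in training code, here is a minimal sketch of a generic confidence-penalty (entropy) regularizer. This is an assumption for illustration only, not the formulation proposed in the paper; the function name and the weight `beta` are hypothetical:

```python
import torch
import torch.nn.functional as F

def confidence_penalized_loss(logits: torch.Tensor,
                              targets: torch.Tensor,
                              beta: float = 0.1) -> torch.Tensor:
    """Cross-entropy loss plus an entropy bonus that discourages
    overly peaked (overconfident) item-score distributions.

    logits:  (batch, n_items) unnormalised recommendation scores
    targets: (batch,) index of the observed/clicked item
    beta:    strength of the confidence penalty (hypothetical knob)
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
    # Subtracting the entropy term rewards flatter score distributions,
    # i.e. it penalises overconfident predictions.
    return ce - beta * entropy

# Toy usage: 4 users, 6 candidate items.
logits = torch.randn(4, 6, requires_grad=True)
targets = torch.tensor([0, 3, 2, 5])
loss = confidence_penalized_loss(logits, targets, beta=0.1)
loss.backward()
```

The appeal of such principle-based penalties, as the abstract argues, is that they constrain the recommender directly rather than requiring an explicit multi-objective trade-off over downstream consequences.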
Journal description:
ACM Transactions on Intelligent Systems and Technology is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.
ACM TIST is published bimonthly (six issues a year). Each issue contains 8-11 regular papers, each around 20 published journal pages or 10,000 words. Additional references, proofs, graphs, or detailed experimental results can be submitted as a separate appendix, while excessively lengthy papers will be rejected automatically. Authors can include online-only appendices for additional content of their published papers and are encouraged to share their code and/or data with other readers.