Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin
arXiv:2409.06916 (cs.IR), arXiv - CS - Information Retrieval, published 2024-09-10
Recommender systems have become integral to digital experiences, shaping user
interactions and preferences across various platforms. Despite their widespread
use, these systems often suffer from algorithmic biases that can lead to unfair
and unsatisfactory user experiences. This study introduces an interactive tool
designed to help users comprehend and explore the impacts of algorithmic harms
in recommender systems. By leveraging visualizations, counterfactual
explanations, and interactive modules, the tool allows users to investigate how
biases such as miscalibration, stereotypes, and filter bubbles affect their
recommendations. Its design, informed by in-depth user interviews, benefits
both general users and researchers by increasing transparency and offering
personalized impact assessments, ultimately fostering a better understanding
of algorithmic biases and contributing to more equitable recommendation
outcomes.
This work provides valuable insights for future research and practical
applications in mitigating bias and enhancing fairness in machine learning
algorithms.
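
The abstract names miscalibration and counterfactual exploration as central concepts. As a purely illustrative aid (not the authors' tool), the sketch below shows one common way miscalibration is formalized in the calibrated-recommendations literature: the KL divergence between the genre distribution of a user's history and that of their recommendation list, together with a toy counterfactual probe that removes one history item and re-measures the metric. All item names, genre labels, and data here are hypothetical.

```python
import math
from collections import Counter

def genre_distribution(items, smoothing=1e-6):
    """Normalized genre frequencies over a list of (item_id, genre) pairs."""
    counts = Counter(genre for _, genre in items)
    total = sum(counts.values())
    genres = set(counts)
    return {g: (counts[g] + smoothing) / (total + smoothing * len(genres))
            for g in genres}

def kl_divergence(p, q, eps=1e-6):
    """KL(p || q); genres missing from q are smoothed with eps."""
    return sum(p_g * math.log(p_g / q.get(g, eps)) for g, p_g in p.items())

def miscalibration(history, recommendations):
    """Higher values mean the recommendations drift further from the history."""
    return kl_divergence(genre_distribution(history),
                         genre_distribution(recommendations))

# Toy example: a hypothetical viewing history and recommendation list.
history = [("m1", "drama"), ("m2", "drama"), ("m3", "comedy"), ("m4", "drama")]
recs = [("r1", "action"), ("r2", "action"), ("r3", "drama"), ("r4", "action")]

print(f"baseline miscalibration: {miscalibration(history, recs):.3f}")

# Counterfactual probe: "what if I had never watched m3?"
# In a real system the recommendation list would be recomputed from the
# altered history; to keep the sketch small we only re-measure the metric
# against the unchanged list.
counterfactual_history = [item for item in history if item[0] != "m3"]
print(f"counterfactual miscalibration: "
      f"{miscalibration(counterfactual_history, recs):.3f}")
```

In the paper's interactive setting, such a metric would presumably be recomputed after the recommender reacts to the counterfactual history and then surfaced through visualizations; this sketch covers only the measurement layer.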