Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin
{"title":"交互式反事实探索推荐系统中的算法危害","authors":"Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin","doi":"arxiv-2409.06916","DOIUrl":null,"url":null,"abstract":"Recommender systems have become integral to digital experiences, shaping user\ninteractions and preferences across various platforms. Despite their widespread\nuse, these systems often suffer from algorithmic biases that can lead to unfair\nand unsatisfactory user experiences. This study introduces an interactive tool\ndesigned to help users comprehend and explore the impacts of algorithmic harms\nin recommender systems. By leveraging visualizations, counterfactual\nexplanations, and interactive modules, the tool allows users to investigate how\nbiases such as miscalibration, stereotypes, and filter bubbles affect their\nrecommendations. Informed by in-depth user interviews, this tool benefits both\ngeneral users and researchers by increasing transparency and offering\npersonalized impact assessments, ultimately fostering a better understanding of\nalgorithmic biases and contributing to more equitable recommendation outcomes.\nThis work provides valuable insights for future research and practical\napplications in mitigating bias and enhancing fairness in machine learning\nalgorithms.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems\",\"authors\":\"Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin\",\"doi\":\"arxiv-2409.06916\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recommender systems have become integral to digital experiences, shaping user\\ninteractions and preferences across various platforms. Despite their widespread\\nuse, these systems often suffer from algorithmic biases that can lead to unfair\\nand unsatisfactory user experiences. This study introduces an interactive tool\\ndesigned to help users comprehend and explore the impacts of algorithmic harms\\nin recommender systems. By leveraging visualizations, counterfactual\\nexplanations, and interactive modules, the tool allows users to investigate how\\nbiases such as miscalibration, stereotypes, and filter bubbles affect their\\nrecommendations. 
Informed by in-depth user interviews, this tool benefits both\\ngeneral users and researchers by increasing transparency and offering\\npersonalized impact assessments, ultimately fostering a better understanding of\\nalgorithmic biases and contributing to more equitable recommendation outcomes.\\nThis work provides valuable insights for future research and practical\\napplications in mitigating bias and enhancing fairness in machine learning\\nalgorithms.\",\"PeriodicalId\":501281,\"journal\":{\"name\":\"arXiv - CS - Information Retrieval\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06916\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06916","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
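
The abstract singles out miscalibration, stereotypes, and filter bubbles as the harms the tool lets users explore through counterfactuals. As a rough, hypothetical sketch of the underlying idea (not the authors' implementation, which the abstract does not detail), the snippet below scores miscalibration as the KL divergence between the genre mix of a user's history and that of their recommendation list, in the spirit of calibrated recommendation (Steck, RecSys 2018), and probes a simple counterfactual by removing one history item and re-scoring. Every identifier here (GENRES, recommend, and the toy data) is assumed for illustration.

```python
# Hypothetical sketch: quantifying miscalibration and probing a counterfactual.
# Not the paper's implementation; all names and data here are illustrative.
import math
from collections import Counter

GENRES = ["action", "comedy", "drama", "documentary"]

def genre_distribution(items, smoothing=1e-6):
    """Smoothed, normalized genre frequencies over (title, genre) pairs."""
    counts = Counter(genre for _, genre in items)
    total = sum(counts.values())
    return {g: (counts[g] + smoothing) / (total + smoothing * len(GENRES))
            for g in GENRES}

def miscalibration(history, recs):
    """KL(p || q): how far the recommended genre mix drifts from the user's taste."""
    p, q = genre_distribution(history), genre_distribution(recs)
    return sum(p[g] * math.log(p[g] / q[g]) for g in GENRES)

def counterfactual_effect(history, removed_item, recommend):
    """Change in miscalibration if one history item had never been consumed.

    `recommend` is a stand-in for any recommender: history -> list of items.
    A negative result means removing the item would reduce miscalibration.
    """
    base = miscalibration(history, recommend(history))
    reduced = [it for it in history if it != removed_item]
    return miscalibration(reduced, recommend(reduced)) - base

if __name__ == "__main__":
    history = [("Heat", "action"), ("Up", "comedy"),
               ("Her", "drama"), ("Jaws", "action")]
    recs = [("Die Hard", "action")] * 5          # over-concentrated list
    print(f"miscalibration: {miscalibration(history, recs):.3f}")  # large => drift
```

The KL divergence is asymmetric by design in this framing: p is the user's own taste profile, so the score penalizes recommendation lists that drift away from the user's preferences, which is the kind of personalized impact signal an interactive tool could surface per user.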