Proceedings of the Conference on Fairness, Accountability, and Transparency: Latest Publications

SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287583
D. Bountouridis, Jaron Harambam, M. Makhortykh, M. Marrero, N. Tintarev, C. Hauff
{"title":"SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments","authors":"D. Bountouridis, Jaron Harambam, M. Makhortykh, M. Marrero, N. Tintarev, C. Hauff","doi":"10.1145/3287560.3287583","DOIUrl":"https://doi.org/10.1145/3287560.3287583","url":null,"abstract":"The growing volume of digital data stimulates the adoption of recommender systems in different socioeconomic domains, including news industries. While news recommenders help consumers deal with information overload and increase their engagement, their use also raises an increasing number of societal concerns, such as \"Matthew effects\", \"filter bubbles\", and the overall lack of transparency. We argue that focusing on transparency for content-providers is an under-explored avenue. As such, we designed a simulation framework called SIREN1 (SImulating Recommender Effects in online News environments), that allows content providers to (i) select and parameterize different recommenders and (ii) analyze and visualize their effects with respect to two diversity metrics. Taking the U.S. news media as a case study, we present an analysis on the recommender effects with respect to long-tail novelty and unexpectedness using SIREN. Our analysis offers a number of interesting findings, such as the similar potential of certain algorithmically simple (item-based k-Nearest Neighbour) and sophisticated strategies (based on Bayesian Personalized Ranking) to increase diversity over time. Overall, we argue that simulating the effects of recommender systems can help content providers to make more informed decisions when choosing algorithmic recommenders, and as such can help mitigate the aforementioned societal concerns.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89699350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
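
The diversity metrics named in the abstract can be made concrete. The Python sketch below computes long-tail novelty as the mean self-information of recommended items, a standard formulation from the recommender-diversity literature; SIREN's exact metric definitions may differ, and the data is a toy example.

import math
from collections import Counter

def long_tail_novelty(recommended, interactions):
    # Popularity = number of users who interacted with each item.
    popularity = Counter(item for items in interactions.values() for item in items)
    n_users = len(interactions)
    scores = []
    for item in recommended:
        p = popularity.get(item, 0) / n_users  # fraction of users who read it
        # Self-information: rare (long-tail) items score high, hits score low.
        scores.append(-math.log2(p) if p > 0 else math.log2(n_users))
    return sum(scores) / len(scores)

# Toy data: "a1" is a front-page hit, "a4" a long-tail article.
interactions = {"u1": {"a1", "a2"}, "u2": {"a1", "a3"}, "u3": {"a1", "a4"}}
print(long_tail_novelty(["a1", "a4"], interactions))  # scores 0.0 and ~1.585, mean ~0.79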
Controlling Polarization in Personalization: An Algorithmic Framework
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287601
L. E. Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi
{"title":"Controlling Polarization in Personalization: An Algorithmic Framework","authors":"L. E. Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi","doi":"10.1145/3287560.3287601","DOIUrl":"https://doi.org/10.1145/3287560.3287601","url":null,"abstract":"Personalization is pervasive in the online space as it leads to higher efficiency for the user and higher revenue for the platform by individualizing the most relevant content for each user. However, recent studies suggest that such personalization can learn and propagate systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms that are constrained to combat bias and the resulting echo-chamber effect. We propose a versatile framework that allows for the possibility to reduce polarization in personalized systems by allowing the user to constrain the distribution from which content is selected. We then present a scalable algorithm with provable guarantees that satisfies the given constraints on the types of the content that can be displayed to a user, but -- subject to these constraints -- will continue to learn and personalize the content in order to maximize utility. We illustrate this framework on a curated dataset of online news articles that are conservative or liberal, show that it can control polarization, and examine the trade-off between decreasing polarization and the resulting loss to revenue. We further exhibit the flexibility and scalability of our approach by framing the problem in terms of the more general diverse content selection problem and test it empirically on both a News dataset and the MovieLens dataset.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83245245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 76
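
A minimal sketch of the constraint mechanism the abstract describes: content is still chosen by utility, but a per-group cap on the share of displays bounds how one-sided a feed can become. This greedy heuristic only illustrates the constraint mechanics; the paper's algorithm is a constrained online-learning method with provable guarantees, and all names and numbers below are illustrative.

def constrained_select(candidates, utility, group, caps, shown):
    total = len(shown) + 1
    # Try candidates in decreasing utility; take the best one whose
    # group would stay within its allowed share of displays.
    for item in sorted(candidates, key=utility, reverse=True):
        g = group(item)
        if (shown.count(g) + 1) / total <= caps.get(g, 1.0):
            shown.append(g)
            return item
    # Every cap would be exceeded (e.g. cold start): serve the least-shown group.
    item = min(candidates, key=lambda it: shown.count(group(it)))
    shown.append(group(item))
    return item

# Toy feed: unconstrained selection would show the 0.9-utility liberal
# article every time; a 60% cap per leaning forces a mix.
articles = [("a", "liberal", 0.9), ("b", "conservative", 0.8), ("c", "liberal", 0.7)]
shown = []
for _ in range(10):
    constrained_select(articles, utility=lambda a: a[2], group=lambda a: a[1],
                       caps={"liberal": 0.6, "conservative": 0.6}, shown=shown)
print(shown)  # a 60/40 liberal/conservative mix instead of all-liberal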
Fairness and Abstraction in Sociotechnical Systems
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287598
Andrew D. Selbst, D. Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, J. Vertesi
{"title":"Fairness and Abstraction in Sociotechnical Systems","authors":"Andrew D. Selbst, D. Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, J. Vertesi","doi":"10.1145/3287560.3287598","DOIUrl":"https://doi.org/10.1145/3287560.3287598","url":null,"abstract":"A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process. Bedrock concepts in computer science---such as abstraction and modular design---are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce \"fair\" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five \"traps\" that fair-ML work can fall into even as it attempts to be more context-aware in comparison to traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps through a refocusing of design in terms of process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77045777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 641
The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287568
Jake Goldenfein
{"title":"The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism","authors":"Jake Goldenfein","doi":"10.1145/3287560.3287568","DOIUrl":"https://doi.org/10.1145/3287560.3287568","url":null,"abstract":"Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80220759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287577
Meg Young, Luke Rodriguez, Emilyann Keller, Feiyang Sun, Boyang Sa, Jan Whittington, Bill Howe
{"title":"Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing","authors":"Meg Young, Luke Rodriguez, Emilyann Keller, Feiyang Sun, Boyang Sa, Jan Whittington, Bill Howe","doi":"10.1145/3287560.3287577","DOIUrl":"https://doi.org/10.1145/3287560.3287577","url":null,"abstract":"Data too sensitive to be \"open\" for analysis and re-purposing typically remains \"closed\" as proprietary information. This dichotomy undermines efforts to make algorithmic systems more fair, transparent, and accountable. Access to proprietary data in particular is needed by government agencies to enforce policy, researchers to evaluate methods, and the public to hold agencies accountable; all of these needs must be met while preserving individual privacy and firm competitiveness. In this paper, we describe an integrated legal-technical approach provided by a third-party public-private data trust designed to balance these competing interests. Basic membership allows firms and agencies to enable low-risk access to data for compliance reporting and core methods research, while modular data sharing agreements support a wide array of projects and use cases. Unless specifically stated otherwise in an agreement, all data access is initially provided to end users through customized synthetic datasets that offer a) strong privacy guarantees, b) removal of signals that could expose competitive advantage, and c) removal of biases that could reinforce discriminatory policies, all while maintaining fidelity to the original data. We find that using synthetic data in conjunction with strong legal protections over raw data strikes a balance between transparency, proprietorship, privacy, and research objectives. This legal-technical framework can form the basis for data trusts in a variety of contexts.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87680901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
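
As a rough illustration of the synthetic-data idea, the sketch below samples each column independently from its empirical marginal: per-column distributions are preserved while cross-column signals are destroyed. This is a deliberately naive baseline; the data trust described above relies on much stronger techniques with formal privacy guarantees, and the toy trip records are invented.

import random

def naive_synthetic(rows, n_samples, seed=0):
    # Resample each column independently from the observed values.
    # Marginals survive; correlations between columns do not.
    rng = random.Random(seed)
    columns = list(zip(*rows))  # column-major view of the table
    synthetic_cols = [[rng.choice(col) for _ in range(n_samples)] for col in columns]
    return [list(row) for row in zip(*synthetic_cols)]

# Toy mobility records: (destination, trip minutes, payment method).
trips = [("downtown", 12, "card"), ("airport", 30, "cash"), ("downtown", 9, "card")]
print(naive_synthetic(trips, 5))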
Who's the Guinea Pig?: Investigating Online A/B/n Tests in-the-Wild
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287565
Shan Jiang, John Martin, Christo Wilson
{"title":"Who's the Guinea Pig?: Investigating Online A/B/n Tests in-the-Wild","authors":"Shan Jiang, John Martin, Christo Wilson","doi":"10.1145/3287560.3287565","DOIUrl":"https://doi.org/10.1145/3287560.3287565","url":null,"abstract":"A/B/n testing has been adopted by many technology companies as a data-driven approach to product design and optimization. These tests are often run on their websites without explicit consent from users. In this paper, we investigate such online A/B/n tests by using Optimizely as a lens. First, we provide measurement results of 575 websites that use Optimizely drawn from the Alexa Top-1M, and analyze the distributions of their audiences and experiments. Then, we use three case studies to discuss potential ethical pitfalls of such experiments, including involvement of political content, price discrimination, and advertising campaigns. We conclude with a suggestion for greater awareness of ethical concerns inherent in human experimentation and a call for increased transparency among A/B/n test operators.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85562363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
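
For readers unfamiliar with how A/B/n tests split traffic, the sketch below shows hash-based bucketing, the standard mechanism behind such platforms: the same visitor deterministically lands in the same variant. This is a generic illustration, not Optimizely's actual implementation; the identifiers and weights are made up.

import hashlib

def assign_variant(user_id, experiment_id, weights):
    # Hash the (experiment, user) pair to a stable point in [0, 1],
    # then walk the cumulative variant weights to find the bucket.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for variant, weight in weights:
        cumulative += weight
        if point <= cumulative:
            return variant
    return weights[-1][0]  # guard against floating-point rounding

# 50/25/25 split across a control and two treatments.
weights = [("control", 0.50), ("new_headline", 0.25), ("new_price", 0.25)]
print(assign_variant("visitor-42", "exp-homepage", weights))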
Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287563
Ben Green, Yiling Chen
{"title":"Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments","authors":"Ben Green, Yiling Chen","doi":"10.1145/3287560.3287563","DOIUrl":"https://doi.org/10.1145/3287560.3287563","url":null,"abstract":"Despite vigorous debates about the technical characteristics of risk assessments being deployed in the U.S. criminal justice system, remarkably little research has studied how these tools affect actual decision-making processes. After all, risk assessments do not make definitive decisions---they inform judges, who are the final arbiters. It is therefore essential that considerations of risk assessments be informed by rigorous studies of how judges actually interpret and use them. This paper takes a first step toward such research on human interactions with risk assessments through a controlled experimental study on Amazon Mechanical Turk. We found several behaviors that call into question the supposed efficacy and fairness of risk assessments: our study participants 1) underperformed the risk assessment even when presented with its predictions, 2) could not effectively evaluate the accuracy of their own or the risk assessment's predictions, and 3) exhibited behaviors fraught with \"disparate interactions,\" whereby the use of risk assessments led to higher risk predictions about black defendants and lower risk predictions about white defendants. These results suggest the need for a new \"algorithm-in-the-loop\" framework that places machine learning decision-making aids into the sociotechnical context of improving human decisions rather than the technical context of generating the best prediction in the abstract. If risk assessments are to be used at all, they must be grounded in rigorous evaluations of their real-world impacts instead of in their theoretical potential.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86888592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 200
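
The paper's key quantity, how much seeing the tool's score shifts human risk predictions per defendant group, reduces to a simple comparison of means. The sketch below is schematic; the records and numbers are invented for illustration and are not the study's data.

from statistics import mean

def interaction_shift(records, group):
    # Average change in a participant's risk prediction after seeing
    # the tool's score, restricted to one defendant group. If the shift
    # is positive for one group and negative for another, the combined
    # human-plus-tool system treats the groups unequally even when the
    # tool's own scores are calibrated.
    deltas = [r["with_tool"] - r["without_tool"] for r in records if r["race"] == group]
    return mean(deltas)

records = [
    {"race": "black", "without_tool": 5.0, "with_tool": 6.0},
    {"race": "black", "without_tool": 4.0, "with_tool": 5.5},
    {"race": "white", "without_tool": 5.0, "with_tool": 4.5},
    {"race": "white", "without_tool": 4.0, "with_tool": 3.5},
]
print(interaction_shift(records, "black") - interaction_shift(records, "white"))  # 1.75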
Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287591
Brenda Leong, Evan Selinger
{"title":"Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism","authors":"Brenda Leong, Evan Selinger","doi":"10.1145/3287560.3287591","DOIUrl":"https://doi.org/10.1145/3287560.3287591","url":null,"abstract":"The goal of this paper is to advance design, policy, and ethics scholarship on how engineers and regulators can protect consumers from deceptive robots and artificial intelligences that exhibit the problem of dishonest anthropomorphism. The analysis expands upon ideas surrounding the principle of honest anthropomorphism originally formulated by Margot Kaminsky, Mathew Ruben, William D. Smart, and Cindy M. Grimm in their groundbreaking Maryland Law Review article, \"Averting Robot Eyes.\" Applying boundary management theory and philosophical insights into prediction and perception, we create a new taxonomy that identifies fundamental types of dishonest anthropomorphism and pinpoints harms that they can cause. To demonstrate how the taxonomy can be applied as well as clarify the scope of the problems that it can cover, we critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83127589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Fair Allocation through Competitive Equilibrium from Generic Incomes
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-29 DOI: 10.1145/3287560.3287582
Moshe Babaioff, N. Nisan, Inbal Talgam-Cohen
{"title":"Fair Allocation through Competitive Equilibrium from Generic Incomes","authors":"Moshe Babaioff, N. Nisan, Inbal Talgam-Cohen","doi":"10.1145/3287560.3287582","DOIUrl":"https://doi.org/10.1145/3287560.3287582","url":null,"abstract":"Two food banks catering to populations of different sizes with different needs must divide among themselves a donation of food items. What constitutes a \"fair\" allocation of the items among them? Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods among equally entitled agents [Foley 1967, Varian 1974]. Every agent (foodbank) receives an equal endowment of artificial currency with which to \"purchase\" bundles of goods (food items). Prices for the goods are set high enough such that the agents can simultaneously get their favorite within-budget bundle, and low enough such that all goods are allocated (no waste). A CEEI satisfies mathematical notions of fairness like fair-share, and also has built-in transparency -- prices can be published so the agents can verify they're being treated equally. However, a CEEI is not guaranteed to exist when the items are indivisible. We study competitive equilibrium from generic incomes (CEGI), which is based on the idea of slightly perturbed endowments, and enjoys similar fairness, efficiency and transparency properties as CEEI. We show that when the two agents have almost equal endowments and additive preferences for the items, a CEGI always exists. We then consider agents who are a priori non-equal (like different-sized foodbanks); we formulate a new notion of fair allocation among non-equals satisfied by CEGI, and show existence in cases of interest (like when the agents have identical preferences). Experiments on simulated and Spliddit data (a popular fair division website) indicate more general existence. Our results open opportunities for future research on fairness through generic endowments, and on fair treatment of non-equals.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79528978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
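
The gap between CEEI and CEGI shows up already in the smallest possible instance. The worked example below (one indivisible item, two agents who both want it; budgets illustrative) reconstructs, from the equilibrium conditions in the abstract, why equal incomes can fail and why a generic perturbation helps:

\[
b_1 = b_2 = 1:\qquad
p \le 1 \;\Rightarrow\; \text{both agents demand the item (excess demand)},\qquad
p > 1 \;\Rightarrow\; \text{the item goes unsold (waste)},
\]
so no CEEI exists. With generically perturbed budgets \(b_1 = 1\) and \(b_2 = 1 + \varepsilon\) for any \(\varepsilon > 0\), every price
\[
p \in (1,\, 1 + \varepsilon]
\]
yields a competitive equilibrium: agent 2's favorite within-budget bundle is the item, agent 1's is the empty bundle, and all goods are allocated.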
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
Proceedings of the Conference on Fairness, Accountability, and Transparency Pub Date: 2019-01-27 DOI: 10.1145/3287560.3287572
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai
{"title":"Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting","authors":"Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai","doi":"10.1145/3287560.3287572","DOIUrl":"https://doi.org/10.1145/3287560.3287572","url":null,"abstract":"We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on peoples' lives. We analyze the potential allocation harms that can result from semantic representation bias. To do so, we study the impact on occupation classification of including explicit gender indicators---such as first names and pronouns---in different semantic representations of online biographies. Additionally, we quantify the bias that remains when these indicators are \"scrubbed,\" and describe proxy behavior that occurs in the absence of explicit gender indicators. As we demonstrate, differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74301859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 304
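
The headline measurement, the per-occupation gap in true positive rates between genders, reduces to a few lines. The sketch below uses toy records for illustration, not the paper's data.

def tpr_gap(examples, occupation):
    # Among people who truly hold the occupation, how much more often
    # is one gender classified correctly than the other?
    def tpr(gender):
        relevant = [e for e in examples
                    if e["true"] == occupation and e["gender"] == gender]
        hits = sum(1 for e in relevant if e["pred"] == occupation)
        return hits / len(relevant)
    return tpr("male") - tpr("female")

examples = [
    {"true": "surgeon", "pred": "surgeon", "gender": "male"},
    {"true": "surgeon", "pred": "surgeon", "gender": "male"},
    {"true": "surgeon", "pred": "surgeon", "gender": "female"},
    {"true": "surgeon", "pred": "nurse",   "gender": "female"},
]
print(tpr_gap(examples, "surgeon"))  # 1.0 - 0.5 = 0.5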