Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: Latest Publications

Towards a more representative politics in the ethics of computer science
Jared Moore
DOI: 10.1145/3351095.3372854 (published 2020-01-22)
Abstract: Ethics curricula in computer science departments should include a focus on the political action of students. While 'ethics' holds significant sway over current discourse in computer science, recent work, particularly in data science, has shown that this discourse elides the underlying political nature of the problems that it aims to solve. In order to avoid these pitfalls---such as co-option, whitewashing, and assumed universal values---we should recognize and teach the political nature of computing technologies, largely through science and technology studies. Education is an essential focus not just intrinsically, but also because computing students end up joining the companies which have outsize impacts on our lives. At those companies, students both have a responsibility to society and agency beyond just engineering decisions, albeit not uniformly. I propose that we move away from strict ethics curricula and include examples of and calls for political action of students and future engineers. Through such examples---calls to action, practitioner reflections, legislative engagement, direct action---we might allow engineers to better recognize both their diverse agencies and responsibilities.
Citations: 17

Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making
Yunfeng Zhang, Q. Liao, R. Bellamy
DOI: 10.1145/3351095.3372852 (published 2020-01-07)
Abstract: Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, where the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key to their success is to appropriately calibrate human trust in the AI on a case-by-case basis; knowing when to trust or distrust the AI allows the human expert to appropriately apply their knowledge, improving decision outcomes in cases where the model is likely to perform poorly. This research conducts a case study of AI-assisted decision making in which humans and AI have comparable performance alone, and explores whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI. Specifically, we study the effect of showing confidence score and local explanation for a particular prediction. Through two human experiments, we show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors. We also highlight the problems in using local explanation for AI-assisted decision making scenarios and invite the research community to explore new approaches to explainability for calibrating human trust in AI.
Citations: 369

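The experiments above turn on whether a model's stated confidence actually tracks its accuracy. A minimal sketch of that check, binning predictions by confidence and comparing mean stated confidence to empirical accuracy in each bin; the function name and the synthetic data are illustrative assumptions, not the authors' study materials.

```python
import numpy as np

def reliability_table(confidence, correct, n_bins=5):
    """Group predictions by stated confidence and compare to empirical accuracy."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # digitize assigns each score to a bin 1..n_bins (clip keeps 1.0 in the last bin)
    bins = np.clip(np.digitize(confidence, edges), 1, n_bins)
    rows = []
    for b in range(1, n_bins + 1):
        mask = bins == b
        if not mask.any():
            continue
        rows.append({
            "bin": f"[{edges[b - 1]:.1f}, {edges[b]:.1f})",
            "mean_confidence": float(confidence[mask].mean()),
            "accuracy": float(correct[mask].mean()),
            "n": int(mask.sum()),
        })
    return rows

# Illustrative synthetic run: a slightly over-confident classifier,
# whose true accuracy trails its stated confidence by about 5 points.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.random(1000) < (conf - 0.05)
for row in reliability_table(conf, correct):
    print(row)
```

A per-bin gap between mean confidence and accuracy is the kind of miscalibration that would undermine using the score as a trust signal.
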
Onward for the freedom of others: marching beyond the AI ethics
P. Terzis
DOI: 10.1145/3351095.3373152 (published 2020-01-06)
Abstract: The debate on the ethics of Artificial Intelligence brought together different stakeholders including but not limited to academics, policymakers, CEOs, activists, workers' representatives, lobbyists, journalists, and 'moral machines'. Prominent political institutions crafted principles for the 'ethical being' of AI companies while tech giants documented ethics in a series of self-written guidelines. In parallel, a large community started to flourish, focusing on how to technically embed ethical parameters into algorithmic systems. Founded upon the philosophical work of Simone de Beauvoir and Jean-Paul Sartre, this paper explores the philosophical antinomies of the 'AI Ethics' debate as well as the conceptual disorientation of the 'fairness discussion'. By bringing the philosophy of existentialism to the dialogue, this paper attempts to challenge the dialectical monopoly of utilitarianism and sheds fresh light on the already glaring AI arena. Why are 'AI Ethics guidelines' a futile battle doomed to dangerous abstraction? How can this battle harm our sense of collective freedom? What is the uncomfortable reality that remains obscured by the smokescreen of the 'AI Ethics' discussion? And ultimately, what is the alternative? There seems to be a different pathway for discussing and implementing ethics: a pathway that sets the freedom of others at the epicenter of the battle for a sustainable future that is open to all.
Citations: 14

Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
Inioluwa Deborah Raji, A. Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, B. Hutchinson, Jamila Smith-Loud, Daniel Theron, Parker Barnes
DOI: 10.1145/3351095.3372873 (published 2020-01-03)
Abstract: Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development life-cycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Citations: 457

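The abstract describes an audit in which each stage yields documents that roll up into an overall report assessed against the organization's principles. A minimal sketch of how that structure could be represented in code; the stage names, fields, and example system are illustrative assumptions, not the stages or templates defined in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StageArtifact:
    """A document produced by one stage of an internal algorithmic audit."""
    stage: str                # hypothetical stage label, e.g. "scoping" or "testing"
    title: str
    findings: List[str] = field(default_factory=list)

@dataclass
class AuditReport:
    """Overall report assembled from per-stage artifacts."""
    system_name: str
    principles: List[str]     # the organization's stated values or principles
    artifacts: List[StageArtifact] = field(default_factory=list)

    def add(self, artifact: StageArtifact) -> None:
        self.artifacts.append(artifact)

    def summary(self) -> str:
        lines = [f"Audit report for {self.system_name}",
                 f"Assessed against principles: {', '.join(self.principles)}"]
        for a in self.artifacts:
            lines.append(f"- [{a.stage}] {a.title}: {len(a.findings)} finding(s)")
        return "\n".join(lines)

# Hypothetical usage with a made-up system and findings.
report = AuditReport("hiring-ranker-v2", principles=["fairness", "accountability"])
report.add(StageArtifact("scoping", "Intended use and affected groups",
                         ["Use limited to resume screening for engineering roles"]))
report.add(StageArtifact("testing", "Disparity metrics on held-out data",
                         ["Selection-rate gap of 7% between demographic groups"]))
print(report.summary())
```
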
Lessons from archives: strategies for collecting sociocultural data in machine learning
Eun Seo Jo, Timnit Gebru
DOI: 10.1145/3351095.3372829 (published 2019-12-22)
Abstract: A growing body of work shows that many problems in fairness, accountability, transparency, and ethics in machine learning systems are rooted in decisions surrounding the data collection and annotation process. In spite of its fundamental nature, however, data collection remains an overlooked part of the machine learning (ML) pipeline. In this paper, we argue that a new specialization should be formed within ML that is focused on methodologies for data collection and annotation: efforts that require institutional frameworks and procedures. Specifically for sociocultural data, parallels can be drawn from archives and libraries. Archives are the longest standing communal effort to gather human information and archive scholars have already developed the language and procedures to address and discuss many challenges pertaining to data collection such as consent, power, inclusivity, transparency, and ethics & privacy. We discuss these five key approaches in document collection practices in archives that can inform data collection in sociocultural ML. By showing data collection practices from another field, we encourage ML research to be more cognizant and systematic in data collection and draw from interdisciplinary expertise.
Citations: 213

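The five archival concerns named in the abstract (consent, power, inclusivity, transparency, and ethics & privacy) lend themselves to a structured collection record. A minimal sketch of such a record, with hypothetical field names and an invented example dataset; it illustrates documenting a collection process, not a procedure prescribed by the paper.

```python
from dataclasses import dataclass, field

@dataclass
class CollectionRecord:
    """Documentation for one sociocultural dataset, organized around the five
    archival concerns discussed in the paper. Field names are illustrative."""
    dataset_name: str
    consent: str          # how subjects agreed to inclusion
    power: str            # who decided what was collected, and over whom
    inclusivity: str      # whose voices are represented or missing
    transparency: str     # how the process is documented and reviewable
    ethics_privacy: str   # handling of sensitive or identifying material
    open_questions: list = field(default_factory=list)

# Hypothetical record for a made-up collection effort.
record = CollectionRecord(
    dataset_name="oral-history-interviews-2020",
    consent="Written release forms, revocable on request",
    power="Community advisory board approved the collection scope",
    inclusivity="Recruitment targeted under-documented neighborhoods",
    transparency="Collection protocol published alongside the data",
    ethics_privacy="Names and addresses redacted before annotation",
    open_questions=["Should annotator demographics be reported?"],
)
print(record)
```
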
Recommendations and user agency: the reachability of collaboratively-filtered information
Sarah Dean, Sarah Rich, B. Recht
DOI: 10.1145/3351095.3372866 (published 2019-12-20)
Abstract: Recommender systems often rely on models which are trained to maximize accuracy in predicting user preferences. When the systems are deployed, these models determine the availability of content and information to different users. The gap between these objectives gives rise to a potential for unintended consequences, contributing to phenomena such as filter bubbles and polarization. In this work, we consider directly the information availability problem through the lens of user recourse. Using ideas of reachability, we propose a computationally efficient audit for top-N linear recommender models. Furthermore, we describe the relationship between model complexity and the effort necessary for users to exert control over their recommendations. We use this insight to provide a novel perspective on the user cold-start problem. Finally, we demonstrate these concepts with an empirical investigation of a state-of-the-art model trained on a widely used movie ratings dataset.
Citations: 42

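To make the reachability question concrete: can a user, by editing ratings they control, cause a target item to enter their top-N? The paper proposes a computationally efficient audit; the sketch below is only a brute-force illustration of the question for a factorization-style linear recommender, with assumed function names and synthetic item factors rather than the paper's algorithm or dataset.

```python
import numpy as np

def item_reachable(V, rated_idx, ratings, editable_idx, target_idx,
                   top_n=5, grid=np.linspace(1.0, 5.0, 9)):
    """Brute-force reachability check for a linear (matrix-factorization) recommender.

    V            : (n_items, d) item factor matrix
    rated_idx    : list of item indices the user has already rated
    ratings      : the user's ratings for those items
    editable_idx : one rated item whose rating the user is willing to change
    target_idx   : the item we ask about: can it enter the user's top-n?
    After each hypothetical edit, the user vector is refit by least squares
    and the unrated items are re-ranked.
    """
    ratings = np.asarray(ratings, dtype=float)
    edit_pos = rated_idx.index(editable_idx)
    unrated = [i for i in range(V.shape[0]) if i not in rated_idx]
    for new_rating in grid:
        r = ratings.copy()
        r[edit_pos] = new_rating
        u, *_ = np.linalg.lstsq(V[rated_idx], r, rcond=None)  # refit user vector
        scores = V[unrated] @ u
        top = [unrated[i] for i in np.argsort(scores)[::-1][:top_n]]
        if target_idx in top:
            return True, float(new_rating)
    return False, None

# Illustrative random instance (synthetic factors, not a real trained model).
rng = np.random.default_rng(1)
V = rng.normal(size=(50, 8))
rated = [0, 3, 7, 12, 20]
reachable, rating = item_reachable(V, rated, [4, 2, 5, 3, 1],
                                   editable_idx=7, target_idx=30)
print("target reachable:", reachable, "with edited rating:", rating)
```

Items that no feasible edit can surface are, in this sense, unavailable to the user regardless of their actions, which is the recourse gap the paper audits.
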
Garbage in, garbage out?: do machine learning application papers in social computing report where human-labeled training data comes from?
R. Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, Jenny Huang
DOI: 10.1145/3351095.3372862 (published 2019-12-17)
Abstract: Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place.
Citations: 98

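One of the practices the survey looks for is disclosure of inter-rater reliability metrics. A common such metric is Cohen's kappa, sketched below for two raters; the label data is made up for illustration and is not from the authors' content analysis.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items.

    Raw agreement is corrected for the agreement expected by chance,
    given each rater's own label distribution.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1.0 - expected)

# Illustrative labels for ten papers ("yes" = practice reported, "no" = not).
rater_1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "no"]
rater_2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "no"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")   # prints kappa = 0.60
```
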
Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy
Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, Olga Russakovsky
DOI: 10.1145/3351095.3375709 (published 2019-12-16)
Abstract: Computer vision technology is being used by many but remains representative of only a few. People have reported misbehavior of computer vision models, including offensive prediction results and lower performance for underrepresented groups. Current computer vision models are typically developed using datasets consisting of manually annotated images or videos; the data and label distributions in these datasets are critical to the models' behavior. In this paper, we examine ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods. We consider three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: (1) the stagnant concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of all categories with images, and (3) the inequality of representation in the images within concepts. We seek to illuminate the root causes of these concerns and take the first steps to mitigate them constructively.
Citations: 228

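The third factor above, unequal representation within a concept, can be made concrete with a small rebalancing sketch: count how many annotated images each demographic group contributes to a concept and cap every group at the size of the smallest. The capping rule, names, and synthetic annotations are illustrative assumptions, not the paper's actual procedure or data.

```python
from collections import Counter

def balanced_sample_targets(annotations):
    """Given (image_id, group) annotations for one concept, return how many
    images to keep per group so every group is equally represented.

    Caps each group at the size of the smallest group -- one simple
    rebalancing rule, used here only for illustration.
    """
    counts = Counter(group for _, group in annotations)
    cap = min(counts.values())
    return {group: cap for group in counts}, counts

# Synthetic annotations for a single concept (not ImageNet data).
annotations = ([("img%03d" % i, "group_a") for i in range(120)] +
               [("img%03d" % i, "group_b") for i in range(120, 150)] +
               [("img%03d" % i, "group_c") for i in range(150, 195)])
targets, counts = balanced_sample_targets(annotations)
print("current counts:  ", dict(counts))
print("balanced targets:", targets)
```
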
Artificial mental phenomena: psychophysics as a framework to detect perception biases in AI models
Lizhen Liang, Daniel Ernesto Acuna
DOI: 10.1145/3351095.3375623 (published 2019-12-15)
Abstract: Detecting biases in artificial intelligence has become difficult because of the impenetrable nature of deep learning. The central difficulty is in relating unobservable phenomena deep inside models with observable, outside quantities that we can measure from inputs and outputs. For example, can we detect gendered perceptions of occupations (e.g., female librarian, male electrician) using questions to and answers from a word embedding-based system? Current techniques for detecting biases are often customized for a task, dataset, or method, affecting their generalization. In this work, we draw from Psychophysics in Experimental Psychology---meant to relate quantities from the real world (i.e., "Physics") into subjective measures in the mind (i.e., "Psyche")---to propose an intellectually coherent and generalizable framework to detect biases in AI. Specifically, we adapt the two-alternative forced choice task (2AFC) to estimate potential biases and the strength of those biases in black-box models. We successfully reproduce previously-known biased perceptions in word embeddings and sentiment analysis predictions. We discuss how concepts in experimental psychology can be naturally applied to understanding artificial mental phenomena, and how psychophysics can form a useful methodological foundation to study fairness in AI.
Citations: 8

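A rough sense of the 2AFC idea applied to embeddings: on each trial the "observer" (the model plus measurement noise) is forced to pick which of two attribute words is closer to a probe word, and the fraction of trials won by one attribute estimates the strength of the association. The sketch below uses toy three-dimensional vectors standing in for trained embeddings; the noise model, names, and vectors are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def two_afc_choice_rate(probe, option_a, option_b, n_trials=1000, noise=0.05, seed=0):
    """Two-alternative forced choice on embedding similarity.

    Each trial adds noise to the two similarity readings and picks the larger;
    the fraction of trials won by option_a estimates the association strength.
    A rate near 0.5 means no detectable preference.
    """
    rng = np.random.default_rng(seed)
    sim_a = cosine(probe, option_a)
    sim_b = cosine(probe, option_b)
    wins_a = 0
    for _ in range(n_trials):
        if sim_a + rng.normal(0, noise) > sim_b + rng.normal(0, noise):
            wins_a += 1
    return wins_a / n_trials

# Toy vectors standing in for "librarian", "she", "he" (synthetic, not real embeddings).
librarian = np.array([0.9, 0.4, 0.1])
she = np.array([1.0, 0.3, 0.0])
he = np.array([0.2, 0.3, 1.0])
rate = two_afc_choice_rate(librarian, she, he)
print(f"P(choose 'she' over 'he' for 'librarian') = {rate:.2f}")
```
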
On the apparent conflict between individual and group fairness
Reuben Binns
DOI: 10.1145/3351095.3372864 (published 2019-12-14)
Abstract: A distinction has been drawn in fair machine learning research between 'group' and 'individual' fairness measures. Many technical research papers assume that both are important, but conflicting, and propose ways to minimise the trade-offs between these measures. This paper argues that this apparent conflict is based on a misconception. It draws on discussions from within the fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict. First, it outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness, thereby suggesting that there need be no conflict in principle. Second, it considers the concept of individual justice, from legal philosophy and jurisprudence, which seems similar but actually contradicts the notion of individual fairness as proposed in the fair machine learning literature. The conclusion is that the apparent conflict between individual and group fairness is more of an artefact of the blunt application of fairness measures, rather than a matter of conflicting principles. In practice, this conflict may be resolved by a nuanced consideration of the sources of 'unfairness' in a particular deployment context, and the carefully justified application of measures to mitigate it.
Citations: 202

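To make the two families of measures concrete, here is a minimal sketch of one representative from each: a demographic parity gap for group fairness and a similarity-based consistency score for individual fairness. The specific metrics and the tiny synthetic dataset are illustrative choices, not ones the paper prescribes; in this toy data every pair of similar individuals is treated identically while the groups' positive rates still differ, which is the kind of blunt-measure artefact the paper discusses.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Group fairness: gap between the highest and lowest positive-decision rates."""
    decisions = np.asarray(decisions, dtype=float)
    groups = np.asarray(groups)
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def individual_consistency(features, decisions, radius=0.5):
    """Individual fairness: among pairs of individuals closer than `radius`
    in feature space, the fraction that received the same decision."""
    features = np.asarray(features, dtype=float)
    decisions = np.asarray(decisions)
    agree, total = 0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if np.linalg.norm(features[i] - features[j]) <= radius:
                total += 1
                agree += int(decisions[i] == decisions[j])
    return agree / total if total else 1.0

# Tiny synthetic example: one feature, binary decisions, two groups.
X = np.array([[0.1], [0.2], [0.9], [1.0], [0.15], [0.95]])
y = np.array([0, 0, 1, 1, 0, 1])
g = np.array(["a", "a", "a", "b", "b", "b"])
print("demographic parity gap:", demographic_parity_gap(y, g))   # 1/3
print("individual consistency:", individual_consistency(X, y))   # 1.0
```
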