Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: Latest Publications

Positionality-aware machine learning: translation tutorial
Christine Kaeser-Chen, Elizabeth Dubois, Friederike Schuur, E. Moss
{"title":"Positionality-aware machine learning: translation tutorial","authors":"Christine Kaeser-Chen, Elizabeth Dubois, Friederike Schuur, E. Moss","doi":"10.1145/3351095.3375666","DOIUrl":"https://doi.org/10.1145/3351095.3375666","url":null,"abstract":"Positionality is a person's unique and always partial view of the world which is shaped by social and political contexts. Machine Learning (ML) systems have positionality, too, as a consequence of the choices we make when we develop ML systems. Being positionality-aware is key for ML practitioners to acknowledge and embrace the necessary choices embedded in ML by its creators. When groups form a shared view of the world, or group positionality, they have the power to embed and institutionalize their unique perspectives in artifacts such as standards and ontologies. For example, the international standard for reporting diseases and health conditions (International Classification of Diseases, ICD) is shaped by a distinctly medical, European and North American perspective. It dictates how we collect data, and limits what questions we can ask of data and what ML systems we can develop. Researchers struggle to study the effects of social factors on health outcomes because of what the ICD renders legible (usually in medicalized terms) and what it renders invisible (usually social contexts) in data. The ICD, as with all information infrastructures, promotes and propagates the perspective(s) of its creators. Over time, it establishes what counts as \"truth\". Positionality, and how it embeds itself in standards, ontologies, and data collection, is the root for bias in our data and algorithms. Every perspective has its limits - there is no view from nowhere. Without an awareness of positionality, the current debate on bias in machine learning is quite limited: adding more data to the set cannot remove bias. Instead, we propose positionality-aware ML, a new workflow focused on continuous evaluation and improvement of the fit between the positionality embedded in ML systems and the scenarios within which it is deployed. To demonstrate how to uncover positionality in standards, ontologies, data, and ML systems, we discuss recent work on online harassment of Canadian journalists and politicians on Twitter. Using legal definitions of hate speech and harassment, Twitter's community standards, and insight from interviews with journalists and politicians, we created standards and annotation guidelines for labeling the intensity of harassment in tweets. We then hand labeled a sample of data and through this process identified instances where positionality impacts choices about how many categories of harassment should exist, how to label boundary cases, and how to interpret messy data. We take three perspectives---technical, systems, socio-technical---that when combined illuminate areas of tension which serve as a signal of misalignment between the positionality embedded in the ML system and the deployment context. We demonstrate how the concept of positionality allows us to delineate sets of use cases that may not be suited for automated, ML solutions. 
Finally, we discuss strategies for developing positionality-aware ML systems, which embed a positionality appropria","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128370993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
CtrlZ.AI zine fair: critical perspectives
A. Hanna, Emily L. Denton
{"title":"CtrlZ.AI zine fair: critical perspectives","authors":"A. Hanna, Emily L. Denton","doi":"10.1145/3351095.3375692","DOIUrl":"https://doi.org/10.1145/3351095.3375692","url":null,"abstract":"The FAT* conference has begun the necessary conversation on the normative implications and ethical ramifications of sociotechnical systems. However, many scholars have pointed to the limitations in methodologies and scope of analysis (e.g. [8, 11]). In addition to these critiques, we add in the fact that those who are most affected by this technology do not have the skills, training, or technical aptitude to participate in these conversations. With the exception of the 2018 FAT* tutorial which featured Terrance Wilkerson (who had been labeled as likely to highly recidivate by COMPAS) and his partner, there has been silence from those most impacted by algorithmic unfairness at FAT*. This silence has been deafening, as FAT* conversations - with a few notable exceptions (e.g. [1, 4]) - have failed to discuss anti-racist politics, prison abolition, and social justice.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117054377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
The social lives of generative adversarial networks
Michael Castelle
{"title":"The social lives of generative adversarial networks","authors":"Michael Castelle","doi":"10.1145/3351095.3373156","DOIUrl":"https://doi.org/10.1145/3351095.3373156","url":null,"abstract":"Generative adversarial networks (GANs) are a genre of deep learning model of significant practical and theoretical interest for their facility in producing photorealistic 'fake' images which are plausibly similar, but not identical, to a corpus of training data. But from the perspective of a sociologist, the distinctive architecture of GANs is highly suggestive. First, a convolutional neural network for classification, on its own, is (at present) popularly considered to be an 'AI'; and a generative neural network is a kind of inversion of such a classification network (i.e. a layered transformation from a vector of numbers to an image, as opposed to a transformation from an image to a vector of numbers). If, then, in the training of GANs, these two 'AIs' interact with each other in a dyadic fashion, shouldn't we consider that form of learning... social? This observation can lead to some surprising associations as we compare and contrast GANs with the theories of the sociologist Pierre Bourdieu, whose concept of the so-called habitus is one which is simultaneously cognitive and social: a productive perception in which classification practices and practical action cannot be fully disentangled. Bourdieu had long been concerned with the reproduction of social stratification: his early works studied formal public schooling in France not as an egalitarian system but instead as one which unintentionally maintained the persistence of class distinctions. It was, he argued, through the cultural inculcation of an embodied and partially unconscious habitus---a \"durably installed generative principle of regulated improvisations\"---that, he argued, students from the upper classes are given an advantage which is only further reinforced throughout their educational trajectories. For Bourdieu, institutions of schooling instill \"deeply interiorized master patterns\" of behavior and thought (and classification) which in turn direct the acquisition of subsequent patterns, whose character is determined not simply by this cognitive layering but by their actual use in lived practice, especially early in childhood development. In this work I develop a productive analogy between the GAN architecture and Bourdieu's habitus, in three ways. First, I call attention to the fact that connectionist approaches and Bourdieu's theories were both conceived as revolts against rule-bound paradigms. In the 1980s, Rumelhart and McClelland used a multilayer neural network to learn the phonology of English past-tense verbs because \"sometimes we don't follow the rules... language is full of exceptions to the rules\"; and in the case of Bourdieu, the habitus was an answer to a long-standing question: \"how can behaviour be regulated without being the product of obedience to rules?\" Bourdieu strove to transgress what was then seen in the social sciences as a conceptual opposition between structure-based theories of social life and those which emphasized an embodied agency. 
Second, I suggest th","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127723368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
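The dyadic interaction the abstract describes can be made concrete with a minimal sketch. The PyTorch code below is illustrative only (toy 1-D "images", layer sizes, and hyperparameters are assumptions, not the paper's): the discriminator maps data to a vector (the "classification" direction), the generator inverts that mapping from noise to data, and the two are trained against each other in alternation.

```python
# Minimal GAN training loop sketch; data and architecture are toy assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Discriminator: data -> probability of being "real" (the classification direction).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
# Generator: noise vector -> data (the inverted, generative direction).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(256, data_dim) * 0.5 + 1.0  # stand-in training corpus

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # The discriminator learns to tell the corpus from the generator's output...
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator learns to produce samples the discriminator accepts.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each iteration updates the two networks in response to one another, which is the mutual shaping the analogy with the habitus builds on.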
Citations: 9
Algorithmic targeting of social policies: fairness, accuracy, and distributed governance
Alejandro Noriega-Campero, Bernardo Garcia-Bulle, L. F. Cantu, Michiel A. Bakker, Luis Tejerina, A. Pentland
{"title":"Algorithmic targeting of social policies: fairness, accuracy, and distributed governance","authors":"Alejandro Noriega-Campero, Bernardo Garcia-Bulle, L. F. Cantu, Michiel A. Bakker, Luis Tejerina, A. Pentland","doi":"10.1145/3351095.3375784","DOIUrl":"https://doi.org/10.1145/3351095.3375784","url":null,"abstract":"Targeted social policies are the main strategy for poverty alleviation across the developing world. These include targeted cash transfers (CTs), as well as targeted subsidies in health, education, housing, energy, childcare, and others. Due to the scale, diversity, and widespread relevance of targeted social policies like CTs, the algorithmic rules that decide who is eligible to benefit from them---and who is not---are among the most important algorithms operating in the world today. Here we report on a year-long engagement towards improving social targeting systems in a couple of developing countries. We demonstrate that a shift towards the use of AI methods in poverty-based targeting can substantially increase accuracy, extending the coverage of the poor by nearly a million people in two countries, without increasing expenditure. However, we also show that, absent explicit parity constraints, both status quo and AI-based systems induce disparities across population subgroups. Moreover, based on qualitative interviews with local social institutions, we find a lack of consensus on normative standards for prioritization and fairness criteria. Hence, we close by proposing a decision-support platform for distributed governance, which enables a diversity of institutions to customize the use of AI-based insights into their targeting decisions.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116332879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Experimentation with fairness-aware recommendation using librec-auto: hands-on tutorial
R. Burke, M. Mansoury, Nasim Sonboli
{"title":"Experimentation with fairness-aware recommendation using librec-auto: hands-on tutorial","authors":"R. Burke, M. Mansoury, Nasim Sonboli","doi":"10.1145/3351095.3375670","DOIUrl":"https://doi.org/10.1145/3351095.3375670","url":null,"abstract":"The field of machine learning fairness has developed metrics, methodologies, and data sets for experimenting with classification algorithms. However, equivalent research is lacking in the area of personalized recommender systems. This 180-minute hands-on tutorial will introduce participants to concepts in fairness-aware recommendation, and metrics and methodologies in evaluating recommendation fairness. Participants will also gain hands-on experience with conducting fairness-aware recommendation experiments with the LibRec recommendation system using the libauto{} scripting platform, and learn the steps required to configure their own experiments, incorporate their own data sets, and design their own algorithms and metrics.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125953141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Fairness, accountability, transparency in AI at scale: lessons from national programs
M. Ahmad, A. Teredesai, C. Eckert
{"title":"Fairness, accountability, transparency in AI at scale: lessons from national programs","authors":"M. Ahmad, A. Teredesai, C. Eckert","doi":"10.1145/3351095.3375690","DOIUrl":"https://doi.org/10.1145/3351095.3375690","url":null,"abstract":"The panel aims to elucidate how different national govenmental programs are implementing accountability of machine learning systems in healthcare and how accountability is operationlized in different cultural settings in legislation, policy and deployment. We have representatives from three different govenments, UAE, Singapore and Maldives who will discuss what accountability of AI and machine learning means in their contexts and use cases. We hope to have a fruitful conversation around FAT ML as it is operationalized ccross cultures, national boundries and legislative constraints.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":" 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120829430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
Two computer scientists and a cultural scientist get hit by a driver-less car: a method for situating knowledge in the cross-disciplinary study of F-A-T in machine learning: translation tutorial
M. I. Ganesh, F. Dechesne, Zeerak Talat
{"title":"Two computer scientists and a cultural scientist get hit by a driver-less car: a method for situating knowledge in the cross-disciplinary study of F-A-T in machine learning: translation tutorial","authors":"M. I. Ganesh, F. Dechesne, Zeerak Talat","doi":"10.1145/3351095.3375663","DOIUrl":"https://doi.org/10.1145/3351095.3375663","url":null,"abstract":"In a workshop organized in December 2017 in Leiden, the Netherlands, a group of lawyers, computer scientists, artists, activists and social and cultural scientists collectively read a computer science paper about 'improving fairness'. This session was perceived by many participants as eye-opening on how different epistemologies shape approaches to the problem, method and solutions, thus enabling further cross-disciplinary discussions during the rest of the workshop. For many participants it was both refreshing and challenging, in equal measure, to understand how another discipline approached the problem of fairness. Now, as a follow-up we propose a translation tutorial that will engage participants at the FAT* conference in a similar exercise. We will invite participants to work in small groups reading excerpts of academic papers from different disciplinary perspectives on the same theme. We argue that most of us do not read outside our disciplines and thus are not familiar with how the same issues might be framed and addressed by our peers. Thus the purpose will be to have participants reflect on the different genealogies of knowledge in research, and how they erect walls, or generate opportunities for more productive inter-disciplinary work. We argue that addressing, through technical measures or otherwise, matters of ethics, bias and discrimination in AI/ML technologies in society is complicated by the different constructions of knowledge about what ethics (or bias or discrimination) means to different groups of practitioners. In the current academic structure, there are scarce resources to test, build on-or even discard-methods to talk across disciplinary lines. This tutorial is thus proposed to see if this particular method might work.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121457210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
José Mena Roldán, O. Vila, J. V. Marca
{"title":"Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability","authors":"José Mena Roldán, O. Vila, J. V. Marca","doi":"10.1145/3351095.3372825","DOIUrl":"https://doi.org/10.1145/3351095.3372825","url":null,"abstract":"Nowadays, the use of machine learning models is becoming a utility in many applications. Companies deliver pre-trained models encapsulated as application programming interfaces (APIs) that developers combine with third-party components and their own models and data to create complex data products to solve specific problems. The complexity of such products and the lack of control and knowledge of the internals of each component used unavoidable cause effects, such as lack of transparency, difficulty in auditability, and the emergence of potential uncontrolled risks. They are effectively black-boxes. Accountability of such solutions is a challenge for the auditors and the machine learning community. In this work, we propose a wrapper that given a black-box model enriches its output prediction with a measure of uncertainty when applied to a target domain. To develop the wrapper, we follow these steps: Modeling the distribution of the output. In a text classification setting, the output is a probability distribution p(y|X, w*) over the different classes to predict, y, given an input text X and the pre-trained model with parameters w*. We model this output by a random variable to measure the variability that the data noise causes in the output. Here we consider the output distribution coming from a Dirichlet probability density function, thus p(y|X, w*) ~ Dir(α). Decomposition of the Dirichlet concentration parameter. To relate the output of the classifier with the concentration parameter in the Dirichlet distribution, we propose a decomposition of the concentration parameter in two terms: α = βy. The role of this scalar β is to control the spread of the distribution around the expected value, i.e. the original prediction y. Training the wrapper. Sentences are represented as the average value of their word embeddings. This representation feeds a neural network that outputs a single regression value that models the parameter β. For each input, we combine β and the black-box prediction to obtain the corresponding distribution for the output ym,i ~ Dir(αi). By using Monte Carlo sampling, we approximate the expected value of the classification probabilities, [EQUATION] and we train the model applying a cross-entropy loss over the predictions and the labels. Obtaining an uncertainty score from the wrapper. To obtain a numerical value for the uncertainty of a prediction, we draw samples from the resulting Dir(α) to evaluate the predictive entropy with [EQUATION], thus obtaining a numerical score for the uncertainty of each prediction. Using uncertainty for rejection. Based on this wrapper, we provide an actionable mechanism to mitigate risk in the form of decision rejection: once equipped with a value for the uncertainty of a given prediction, we can choose not to issue that prediction when the risk or uncertainty in that decision is significant. 
This results in a rejection system that selects the more confident predictions, discards those more uncertain, a","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116047132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
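A minimal sketch of the wrapper's scoring and rejection steps, assuming the black-box class probabilities y and the regressed spread parameter β are already available; function names, the sample count, and the rejection threshold are illustrative, not the authors' implementation.

```python
# Illustrative Dirichlet uncertainty wrapper: alpha = beta * y, Monte Carlo
# sampling, predictive entropy, and entropy-based rejection (assumed API).
import numpy as np

def wrap_with_uncertainty(probs, beta, n_samples=1000, rng=None):
    """probs: black-box class probabilities (y); beta: scalar spread parameter.
    Returns (approximate expected prediction, predictive entropy)."""
    rng = rng or np.random.default_rng()
    alpha = beta * np.asarray(probs, dtype=float)      # concentration: alpha = beta * y
    samples = rng.dirichlet(alpha, size=n_samples)     # Monte Carlo draws ~ Dir(alpha)
    mean_pred = samples.mean(axis=0)                    # approximate expected value
    entropy = -np.sum(mean_pred * np.log(mean_pred + 1e-12))  # predictive entropy
    return mean_pred, entropy

def predict_or_reject(probs, beta, threshold=0.5):
    """Issue the prediction only when its uncertainty stays below `threshold`."""
    mean_pred, entropy = wrap_with_uncertainty(probs, beta)
    if entropy > threshold:
        return None                                      # reject: too uncertain
    return int(np.argmax(mean_pred))

# Example: the sharp prediction is issued, the flat one is rejected.
print(predict_or_reject([0.9, 0.05, 0.05], beta=50.0))   # -> 0
print(predict_or_reject([0.4, 0.35, 0.25], beta=2.0))    # -> None
```

Sweeping the rejection threshold trades coverage against accuracy, which is the actionable mechanism the abstract describes.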
Citations: 7
Policy 101: an introduction to public policymaking in the EU and US
Natasha Duarte, Stan Adams
{"title":"Policy 101: an introduction to public policymaking in the EU and US","authors":"Natasha Duarte, Stan Adams","doi":"10.1145/3351095.3375668","DOIUrl":"https://doi.org/10.1145/3351095.3375668","url":null,"abstract":"Navigating the rules, processes, and venues through which public policy is made can seem daunting. But public participation in these processes is a crucial part of democratic governance. With a general understanding of when, where, and how to engage in policymaking, anyone can become a policy advocate. This tutorial will introduce some of the most common US (federal and state) and EU policymaking processes and provide guidance to experts in other domains (such as data and computer science) who want to get involved in policymaking. We will discuss the practical considerations involved in identifying and choosing among policymaking opportunities and discuss how to maximize the impact of policymaking interventions. This tutorial is intended to be interactive and will be improved by audience participation and questions.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133462306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Centering disability perspectives in algorithmic fairness, accountability, & transparency
Alexandra Reeve Givens, M. Morris
{"title":"Centering disability perspectives in algorithmic fairness, accountability, & transparency","authors":"Alexandra Reeve Givens, M. Morris","doi":"10.1145/3351095.3375686","DOIUrl":"https://doi.org/10.1145/3351095.3375686","url":null,"abstract":"It is vital to consider the unique risks and impacts of algorithmic decision-making for people with disabilities. The diverse nature of potential disabilities poses unique challenges for approaches to fairness, accountability, and transparency. Many disabled people choose not to disclose their disabilities, making auditing and accountability tools particularly hard to design and operate. Further, the variety inherent in disability poses challenges for collecting representative training data in any quantity sufficient to better train more inclusive and accountable algorithms. This panel highlights areas of concern, present emerging research efforts, and enlist more researchers and advocates to study the potential impacts of algorithmic decision-making on people with disabilities. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity. In the technology space, discussion topics will include methods to assess the fairness of current AI systems, and strategies to develop new systems and bias mitigation approaches that ensure fairness for people with disabilities. For example, how do today's currently-deployed AI systems impact people with disabilities? If developing inclusive datasets is part of the solution, how can researchers ethically gather such data, and what risks might centralizing data about disability pose? What new privacy solutions must developers create to reduce the risk of deductive disclosure of identities of people with disabilities in \"anonymized\" datasets? How can AI models and bias mitigation techniques be developed that handle the unique challenges of disability, i.e., the \"long tail\" and low incidence of many types of disability - for instance, how do we ensure that data about disability are not treated as outliers? What are the pros and cons of developing custom/personalized AI models for people with disabilities versus ensuring that general models are inclusive? In the law and policy space, the framework for people with disabilities requires specific study. For example, the Americans with Disabilities Act (ADA) requires employers to adopt \"reasonable accommodations\" for qualified individuals with a disability. But what is a \"reasonable accommodation\" in the context of machine learning and AI? How will the ADA's unique standards interact with case law and scholarship about algorithmic bias against other protected groups? When the ADA governs what questions employers can ask about a candidate's disability, and HIPAA and the Genetic Information Privacy Act regulate the sharing of health information, how should we think about inferences from data that approximate such questions? Panelists will bring varied perspectives to this conversation, including backgrounds in computer science, disability studies, legal studies, and activism. 
In addition to their sc","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132712211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6