Proceedings of the Conference on Fairness, Accountability, and Transparency: Latest Publications

Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms
Pub Date: 2023-01-01. DOI: 10.1145/3593013.3593991
José Pablo Lapostol Piderit, Romina Garrido Iglesias, María Paz Hermosilla Cornejo
{"title":"Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms","authors":"José Pablo Lapostol Piderit, Romina Garrido Iglesias, María Paz Hermosilla Cornejo","doi":"10.1145/3593013.3593991","DOIUrl":"https://doi.org/10.1145/3593013.3593991","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86082880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021
Pub Date: 2021-01-01. DOI: 10.1145/3442188
Citations: 2
Transparency universal
Pub Date: 2020-01-14. DOI: 10.4324/9780429340819-5
Rachel Adams
{"title":"Transparency universal","authors":"Rachel Adams","doi":"10.4324/9780429340819-5","DOIUrl":"https://doi.org/10.4324/9780429340819-5","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80774647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resisting transparency
Pub Date: 2020-01-14. DOI: 10.4324/9780429340819-10
Rachel Adams
{"title":"Resisting transparency","authors":"Rachel Adams","doi":"10.4324/9780429340819-10","DOIUrl":"https://doi.org/10.4324/9780429340819-10","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85316296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conclusion
Pub Date: 2020-01-14. DOI: 10.4324/9780429340819-11
Rachel Adams
{"title":"Conclusion","authors":"Rachel Adams","doi":"10.4324/9780429340819-11","DOIUrl":"https://doi.org/10.4324/9780429340819-11","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88247144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People
Pub Date: 2019-01-29. DOI: 10.1145/3287560.3287593
Z. Obermeyer, S. Mullainathan
{"title":"Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People","authors":"Z. Obermeyer, S. Mullainathan","doi":"10.1145/3287560.3287593","DOIUrl":"https://doi.org/10.1145/3287560.3287593","url":null,"abstract":"A single algorithm drives an important health care decision for over 70 million people in the US. When health systems anticipate that a patient will have especially complex and intensive future health care needs, she is enrolled in a 'care management' program, which provides considerable additional resources: greater attention from trained providers and help with coordination of her care. To determine which patients will have complex future health care needs, and thus benefit from program enrollment, many systems rely on an algorithmically generated commercial risk score. In this paper, we exploit a rich dataset to study racial bias in a commercial algorithm that is deployed nationwide today in many of the US's most prominent Accountable Care Organizations (ACOs). We document significant racial bias in this widely used algorithm, using data on primary care patients at a large hospital. Blacks and whites with the same algorithmic risk scores have very different realized health. For example, the highest-risk black patients (those at the threshold where patients are auto-enrolled in the program), have significantly more chronic illnesses than white enrollees with the same risk score. We use detailed physiological data to show the pervasiveness of the bias: across a range of biomarkers, from HbA1c levels for diabetics to blood pressure control for hypertensives, we find significant racial health gaps conditional on risk score. This bias has significant material consequences for patients: it effectively means that white patients with the same health as black patients are far more likely be enrolled in the care management program, and benefit from its resources. If we simulated a world without this gap in predictions, blacks would be auto-enrolled into the program at more than double the current rate. An unusual aspect of our dataset is that we observe not just the risk scores but also the input data and objective function used to construct it. This provides a unique window into the mechanisms by which bias arises. The algorithm is given a data frame with (1) Yit (label), total medical expenditures ('costs') in year t; and (2) Xi,t--1 (features), fine-grained care utilization data in year t -- 1 (e.g., visits to cardiologists, number of x-rays, etc.). The algorithm's predicted risk of developing complex health needs is thus in fact predicted costs. And by this metric, one could easily call the algorithm unbiased: costs are very similar for black and white patients with the same risk scores. So far, this is inconsistent with algorithmic bias: conditional on risk score, predictions do not favor whites or blacks. The fundamental problem we uncover is that when thinking about 'health care needs,' hospitals and insurers focus on costs. They use an algorithm whose specific objective is cost prediction, and from this perspective, predictions are accurate and unbiased. Yet from the social perspective, actual health -- not just costs -- also matters. 
This is wh","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81160042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
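The audit logic the abstract describes, comparing realized health conditional on risk score across races and simulating need-based rather than cost-based enrollment, can be sketched in a few lines. The following Python snippet is a minimal illustration on synthetic data; the column names, the injected disparity, and the top-3% auto-enrollment cutoff are hypothetical stand-ins, not the authors' dataset or threshold.

```python
# Minimal sketch of a conditional-on-score bias audit (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "race": rng.choice(["black", "white"], size=n),
    "risk_score": rng.uniform(0, 100, size=n),  # cost-based algorithmic score
})
# Toy disparity: at the same score, Black patients carry more chronic illness.
df["n_chronic"] = rng.poisson(
    lam=1 + df["risk_score"] / 25 + (df["race"] == "black") * 0.8
)

# 1) Bias check: realized health conditional on risk score, by race.
df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
gap = df.groupby(["score_decile", "race"])["n_chronic"].mean().unstack()
print(gap)  # within each decile, compare mean chronic conditions across races

# 2) Counterfactual enrollment: auto-enroll the same number of patients by
#    health need instead of predicted cost, and compare Black enrollment rates.
threshold = df["risk_score"].quantile(0.97)          # hypothetical cutoff
cost_based = df[df["risk_score"] >= threshold]
need_based = df.nlargest(len(cost_based), "n_chronic")
for name, grp in [("cost-based", cost_based), ("need-based", need_based)]:
    print(name, (grp["race"] == "black").mean())
```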
Citations: 97
Fairness-Aware Programming
Pub Date: 2019-01-29. DOI: 10.1145/3287560.3287588
Aws Albarghouthi, Samuel Vinitsky
{"title":"Fairness-Aware Programming","authors":"Aws Albarghouthi, Samuel Vinitsky","doi":"10.1145/3287560.3287588","DOIUrl":"https://doi.org/10.1145/3287560.3287588","url":null,"abstract":"Increasingly, programming tasks involve automating and deploying sensitive decision-making processes that may have adverse impacts on individuals or groups of people. The issue of fairness in automated decision-making has thus become a major problem, attracting interdisciplinary attention. In this work, we aim to make fairness a first-class concern in programming. Specifically, we propose fairness-aware programming, where programmers can state fairness expectations natively in their code, and have a runtime system monitor decision-making and report violations of fairness. We present a rich and general specification language that allows a programmer to specify a range of fairness definitions from the literature, as well as others. As the decision-making program executes, the runtime maintains statistics on the decisions made and incrementally checks whether the fairness definitions have been violated, reporting such violations to the developer. The advantages of this approach are two fold: (i) Enabling declarative mathematical specifications of fairness in the programming language simplifies the process of checking fairness, as the programmer does not have to write ad hoc code for maintaining statistics. (ii) Compared to existing techniques for checking and ensuring fairness, our approach monitors a decision-making program in the wild, which may be running on a distribution that is unlike the dataset on which a classifier was trained and tested. We describe an implementation of our proposed methodology as a library in the Python programming language and illustrate its use on case studies from the algorithmic fairness literature.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84336804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media
Pub Date: 2019-01-29. DOI: 10.1145/3287560.3287587
Stevie Chancellor, M. Birnbaum, E. Caine, V. Silenzio, M. Choudhury
{"title":"A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media","authors":"Stevie Chancellor, M. Birnbaum, E. Caine, V. Silenzio, M. Choudhury","doi":"10.1145/3287560.3287587","DOIUrl":"https://doi.org/10.1145/3287560.3287587","url":null,"abstract":"Powered by machine learning techniques, social media provides an unobtrusive lens into individual behaviors, emotions, and psychological states. Recent research has successfully employed social media data to predict mental health states of individuals, ranging from the presence and severity of mental disorders like depression to the risk of suicide. These algorithmic inferences hold great potential in supporting early detection and treatment of mental disorders and in the design of interventions. At the same time, the outcomes of this research can pose great risks to individuals, such as issues of incorrect, opaque algorithmic predictions, involvement of bad or unaccountable actors, and potential biases from intentional or inadvertent misuse of insights. Amplifying these tensions, there are also divergent and sometimes inconsistent methodological gaps and under-explored ethics and privacy dimensions. This paper presents a taxonomy of these concerns and ethical challenges, drawing from existing literature, and poses questions to be resolved as this research gains traction. We identify three areas of tension: ethics committees and the gap of social media research; questions of validity, data, and machine learning; and implications of this research for key stakeholders. We conclude with calls to action to begin resolving these interdisciplinary dilemmas.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78852107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 135
Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines "Good" and "Bad" Behavior
Pub Date: 2019-01-29. DOI: 10.1145/3287560.3287585
Severin Engelmann, Mo Chen, Felix A. Fischer, Ching-yu Kao, Jens Grossklags
{"title":"Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines \"Good\" and \"Bad\" Behavior","authors":"Severin Engelmann, Mo Chen, Felix A. Fischer, Ching-yu Kao, Jens Grossklags","doi":"10.1145/3287560.3287585","DOIUrl":"https://doi.org/10.1145/3287560.3287585","url":null,"abstract":"China's Social Credit System (SCS, 社会信用体系 or shehui xinyong tixi) is expected to become the first digitally-implemented nationwide scoring system with the purpose to rate the behavior of citizens, companies, and other entities. Thereby, in the SCS, \"good\" behavior can result in material rewards and reputational gain while \"bad\" behavior can lead to exclusion from material resources and reputational loss. Crucially, for the implementation of the SCS, society must be able to distinguish between behaviors that result in reward and those that lead to sanction. In this paper, we conduct the first transparency analysis of two central administrative information platforms of the SCS to understand how the SCS currently defines \"good\" and \"bad\" behavior. We analyze 194,829 behavioral records and 942 reports on citizens' behaviors published on the official Beijing SCS website and the national SCS platform \"Credit China\", respectively. By applying a mixed-method approach, we demonstrate that there is a considerable asymmetry between information provided by the so-called Redlist (information on \"good\" behavior) and the Blacklist (information on \"bad\" behavior). At the current stage of the SCS implementation, the majority of explanations on blacklisted behaviors includes a detailed description of the causal relation between inadequate behavior and its sanction. On the other hand, explanations on redlisted behavior, which comprise positive norms fostering value internalization and integration, are less transparent. Finally, this first SCS transparency analysis suggests that socio-technical systems applying a scoring mechanism might use different degrees of transparency to achieve particular behavioral engineering goals.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82159949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
Measuring the Biases that Matter: The Ethical and Causal Foundations for Measures of Fairness in Algorithms
Pub Date: 2019-01-29. DOI: 10.1145/3287560.3287573
Bruce Glymour, J. Herington
{"title":"Measuring the Biases that Matter: The Ethical and Casual Foundations for Measures of Fairness in Algorithms","authors":"Bruce Glymour, J. Herington","doi":"10.1145/3287560.3287573","DOIUrl":"https://doi.org/10.1145/3287560.3287573","url":null,"abstract":"Measures of algorithmic bias can be roughly classified into four categories, distinguished by the conditional probabilistic dependencies to which they are sensitive. First, measures of \"procedural bias\" diagnose bias when the score returned by an algorithm is probabilistically dependent on a sensitive class variable (e.g. race or sex). Second, measures of \"outcome bias\" capture probabilistic dependence between class variables and the outcome for each subject (e.g. parole granted or loan denied). Third, measures of \"behavior-relative error bias\" capture probabilistic dependence between class variables and the algorithmic score, conditional on target behaviors (e.g. recidivism or loan default). Fourth, measures of \"score-relative error bias\" capture probabilistic dependence between class variables and behavior, conditional on score. Several recent discussions have demonstrated a tradeoff between these different measures of algorithmic bias, and at least one recent paper has suggested conditions under which tradeoffs may be minimized. In this paper we use the machinery of causal graphical models to show that, under standard assumptions, the underlying causal relations among variables forces some tradeoffs. We delineate a number of normative considerations that are encoded in different measures of bias, with reference to the philosophical literature on the wrongfulness of disparate treatment and disparate impact. While both kinds of error bias are nominally motivated by concern to avoid disparate impact, we argue that consideration of causal structures shows that these measures are better understood as complicated and unreliable measures of procedural biases (i.e. disparate treatment). Moreover, while procedural bias is indicative of disparate treatment, we show that the measure of procedural bias one ought to adopt is dependent on the account of the wrongfulness of disparate treatment one endorses. Finally, given that neither score-relative nor behavior-relative measures of error bias capture the relevant normative considerations, we suggest that error bias proper is best measured by score-based measures of accuracy, such as the Brier score.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81879050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48