Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Fair Transfer Learning with Missing Protected Attributes
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314236
Amanda Coston, K. Ramamurthy, Dennis Wei, Kush R. Varshney, S. Speakman, Zairah Mustahsan, Supriyo Chakraborty
Abstract: Risk assessment is a growing use for machine learning models. When used in high-stakes applications, especially ones regulated by anti-discrimination laws or governed by societal norms for fairness, it is important to ensure that learned models do not propagate and scale any biases that may exist in training data. In this paper, we add an additional challenge beyond fairness: unsupervised domain adaptation to covariate shift between a source and target distribution. Motivated by the real-world problems of risk assessment in new markets for health insurance in the United States and for mobile money-based loans in East Africa, we provide a precise formulation of the machine-learning problem with covariate shift and score parity. Our formulation focuses on situations in which protected attributes are not available in either the source or the target domain. We propose two new weighting methods: prevalence-constrained covariate shift (PCCS), which does not require protected attributes in the target domain, and target-fair covariate shift (TFCS), which does not require protected attributes in the source domain. We empirically demonstrate their efficacy in two applications.
Citations: 86
Modeling Risk and Achieving Algorithmic Fairness Using Potential Outcomes
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314323
Alan Mishler
Abstract: Predictive models and algorithms are increasingly used to support human decision makers, raising concerns about how to ensure that these algorithms are fair. Additionally, these tools are generally designed to predict observable outcomes, but this is problematic when the treatment or exposure is confounded with the outcome. I argue that in most cases, what is actually of interest are potential outcomes. I contrast modeling approaches built around observable vs. potential outcomes, and I recharacterize error rate-based algorithmic fairness metrics in terms of potential outcomes. I also aim to formally model the consequences of using confounded observable predictions to drive interventions.
Citations: 7
Requirements for an Artificial Agent with Norm Competence
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314252
B. Malle, P. Bello, Matthias Scheutz
Abstract: Human behavior is frequently guided by social and moral norms, and no human community can exist without norms. Robots that enter human societies must therefore behave in norm-conforming ways as well. However, there is currently no solid cognitive or computational model of how human norms are represented, activated, and learned. We provide a conceptual and psychological analysis of key properties of human norms and identify the demands these properties put on any artificial agent that incorporates norms: demands on the format of norm representations, their structured organization, and their learning algorithms.
Citations: 16
A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314284
De'Aira G. Bryant, A. Howard
Abstract: In recent news, organizations have been considering the use of facial and emotion recognition for applications involving youth, such as surveillance and security in schools. However, the majority of research on facial emotion recognition has focused on adults. Children, particularly in their early years, have been shown to express emotions quite differently than adults. Thus, before such algorithms are deployed in environments that impact the wellbeing and circumstances of youth, their accuracy should be carefully examined with respect to this target demographic. In this work, we utilize several datasets that contain facial expressions of children linked to their emotional state to evaluate eight different commercial emotion classification systems. We compare the ground truth labels provided by the respective datasets to the labels given with the highest confidence by the classification systems, and we assess the results in terms of matching score (true positive rate), positive predictive value, and failure-to-compute rate. Overall, the emotion recognition systems displayed subpar performance on the datasets of children's expressions compared to prior work with adult datasets and initial human ratings. We then identify limitations associated with automated recognition of emotions in children and suggest directions for enhancing recognition accuracy through data diversification, dataset accountability, and algorithmic regulation.
Citations: 18
Faithful and Customizable Explanations of Black Box Models
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314229
Himabindu Lakkaraju, Ece Kamar, R. Caruana, J. Leskovec
Abstract: As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework which facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework provides end users (e.g., doctors) with the flexibility of customizing the model explanations by allowing them to input the features of interest. The construction of explanations is guided by a novel objective function that we propose to simultaneously optimize for fidelity to the original model, unambiguity, and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets, each of which captures the behavior of a given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation with real-world datasets and user studies demonstrates that our approach can generate customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models compared to state-of-the-art baselines.
Citations: 224
Machine Learning in Legal Practice: Notes from Recent History
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314324
Fernando A. Delgado
Abstract: Often framed as a relatively new and controversial phenomenon, the application of machine learning (ML) techniques to legal analysis and decision-making in the US justice system has a rich yet underexamined history. My research examines how ML came to be adopted as a standard tool for automating fact discovery in high-stakes civil litigation. The key controversies and consensuses that emerged during the experimentation and early adoption phase of this technology (2008-2015) offer a useful case study of an expert professional field wrestling with the challenges of integrating ML into sensitive decision-making workflows.
Citations: 0
How Technological Advances Can Reveal Rights
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314274
Jack Parker, D. Danks
Abstract: Over recent decades, technological development has been accompanied by the proposal of new rights by various groups and individuals: the right to public anonymity, the right to be forgotten, and the right to disconnect, for example. Although there is widespread acknowledgment of the motivation behind these proposed rights, there is little agreement about their actual normative status. One potential challenge is that the claims only arise in contingent socio-technical contexts, which may affect how we conceive of them ethically (albeit not necessarily in terms of policy). What sort of morally legitimate rights claims depend on such contingencies? Our paper investigates the grounds on which such proposals might be considered "actual" rights. The full paper can be found at http://www.andrew.cmu.edu/user/cgparker/Parker_Danks_RevealedRights.pdf. We propose the notion of a revealed right: a right that only imposes duties, and thus is only meaningfully revealed, in certain technological contexts. Our framework is based on an interest theory approach to rights, which understands rights in terms of a justificatory role: morally important aspects of a person's well-being (interests) ground rights, which then justify holding someone to a duty that promotes or protects that interest. Our framework uses this approach to interpret the conflicts that lead to revealed rights in terms of how technological developments cause shifts in the balance of power to promote particular interests. Different parties can have competing or conflicting interests. It is also generally accepted that some interests are more normatively important than others (even if only within a particular framework). We can refer to this difference in importance by saying that the more important interest has greater "moral weight" (in that context). The moral weight of an interest is connected to its contribution to the interest-holder's overall well-being, and thereby determines the strength of the reason that a corresponding right provides to justify a duty. Improved technology can offer resources that grant one party increased causal power to realize its interests to the detriment of another's capacity to do so, even while the relative moral weight of their interests remains the same. Such changes in circumstance can make the importance of protecting a particular interest newly salient. If that interest's moral weight justifies establishing a duty to protect it, thereby limiting the threat posed by the new socio-technical context, then a right is revealed. Revealed rights justify realignment between the moral weight and causal power orderings so that people with weightier interests have greater power to protect those interests. In the extended paper, we show how this account can be applied to the interpretation of two recently proposed "rights": the right to be forgotten, and the right to disconnect. Since we are focused on making sense of revealed rights, not any particul…
Citations: 3
Equalized Odds Implies Partially Equalized Outcomes Under Realistic Assumptions
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314290
D. McNamara
Abstract: Equalized odds -- where the true positive rates and false positive rates are equal across groups (e.g. racial groups) -- is a common quantitative measure of fairness. Equalized outcomes -- where the difference in predicted outcomes between groups is less than the difference observed in the training data -- is more contentious, because it is incompatible with perfectly accurate predictions. We formalize and quantify the relationship between these two important but seemingly distinct notions of fairness. We show that under realistic assumptions, equalized odds implies partially equalized outcomes. We prove a comparable result for approximately equalized odds. In addition, we generalize a well-known previous result about the incompatibility of equalized odds and another definition of fairness known as calibration, by showing that partially equalized outcomes implies non-calibration. Our results highlight the risks of using trends observed across groups to make predictions about individuals.
Citations: 5
Rightful Machines and Dilemmas
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314261
A. T. Wright
Abstract: In this paper I set out a new Kantian approach to resolving conflicts and dilemmas of obligation for semi-autonomous machine agents such as self-driving cars. First, I argue that efforts to build explicitly moral machine agents should focus on what Kant refers to as duties of right, or justice, rather than on duties of virtue, or ethics. In a society where everyone is morally equal, no one individual or group has the normative authority to unilaterally decide how moral conflicts should be resolved for everyone. Only public institutions to which everyone could consent have the authority to define, enforce, and adjudicate our rights and obligations with respect to one another. Then, I show how the shift from ethics to a standard of justice resolves the conflict of obligations in what is known as the "trolley problem" for rightful machine agents. Finally, I consider how a deontic logic suitable for governing explicitly rightful machines might meet the normative requirements of justice.
Citations: 5
Towards Formal Models of Blameworthiness
Pub Date: 2019-01-27 DOI: 10.1145/3306618.3314321
Meir Friedenberg
Abstract: As we move towards an era in which autonomous systems are ubiquitous, being able to reason formally about moral responsibility for outcomes will become more and more critical. My research has focused on formalizing notions of blameworthiness and responsibility. I summarize here some work by myself and others towards this end and also discuss interesting directions for future work.
Citations: 0