{"title":"Fair Transfer Learning with Missing Protected Attributes","authors":"Amanda Coston, K. Ramamurthy, Dennis Wei, Kush R. Varshney, S. Speakman, Zairah Mustahsan, Supriyo Chakraborty","doi":"10.1145/3306618.3314236","DOIUrl":"https://doi.org/10.1145/3306618.3314236","url":null,"abstract":"Risk assessment is a growing use for machine learning models. When used in high-stakes applications, especially ones regulated by anti-discrimination laws or governed by societal norms for fairness, it is important to ensure that learned models do not propagate and scale any biases that may exist in training data. In this paper, we add on an additional challenge beyond fairness: unsupervised domain adaptation to covariate shift between a source and target distribution. Motivated by the real-world problem of risk assessment in new markets for health insurance in the United States and mobile money-based loans in East Africa, we provide a precise formulation of the machine learning with covariate shift and score parity problem. Our formulation focuses on situations in which protected attributes are not available in either the source or target domain. We propose two new weighting methods: prevalence-constrained covariate shift (PCCS) which does not require protected attributes in the target domain and target-fair covariate shift (TFCS) which does not require protected attributes in the source domain. We empirically demonstrate their efficacy in two applications.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128315612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Risk and Achieving Algorithmic Fairness Using Potential Outcomes","authors":"Alan Mishler","doi":"10.1145/3306618.3314323","DOIUrl":"https://doi.org/10.1145/3306618.3314323","url":null,"abstract":"Predictive models and algorithms are increasingly used to support human decision makers, raising concerns about how to ensure that these algorithms are fair. Additionally, these tools are generally designed to predict observable outcomes, but this is problematic when the treatment or exposure is confounded with the outcome. I argue that in most cases, what is actually of interest are potential outcomes. I contrast modeling approaches built around observable vs. potential outcomes, and I recharacterize error rate-based algorithmic fairness metrics in terms of potential outcomes. I also aim to formally model the consequences of using confounded observable predictions to drive interventions.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128414403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Requirements for an Artificial Agent with Norm Competence","authors":"B. Malle, P. Bello, Matthias Scheutz","doi":"10.1145/3306618.3314252","DOIUrl":"https://doi.org/10.1145/3306618.3314252","url":null,"abstract":"Human behavior is frequently guided by social and moral norms, and no human community can exist without norms. Robots that enter human societies must therefore behave in norm-conforming ways as well. However, currently there is no solid cognitive or computational model available of how human norms are represented, activated, and learned. We provide a conceptual and psychological analysis of key properties of human norms and identify the demands these properties put on any artificial agent that incorporates norms-demands on the format of norm representations, their structured organization, and their learning algorithms.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114367946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity","authors":"De'Aira G. Bryant, A. Howard","doi":"10.1145/3306618.3314284","DOIUrl":"https://doi.org/10.1145/3306618.3314284","url":null,"abstract":"In recent news, organizations have been considering the use of facial and emotion recognition for applications involving youth such as tackling surveillance and security in schools. However, the majority of efforts on facial emotion recognition research have focused on adults. Children, particularly in their early years, have been shown to express emotions quite differently than adults. Thus, before such algorithms are deployed in environments that impact the wellbeing and circumstance of youth, a careful examination should be made on their accuracy with respect to appropriateness for this target demographic. In this work, we utilize several datasets that contain facial expressions of children linked to their emotional state to evaluate eight different commercial emotion classification systems. We compare the ground truth labels provided by the respective datasets to the labels given with the highest confidence by the classification systems and assess the results in terms of matching score (TPR), positive predictive value, and failure to compute rate. Overall results show that the emotion recognition systems displayed subpar performance on the datasets of children's expressions compared to prior work with adult datasets and initial human ratings. We then identify limitations associated with automated recognition of emotions in children and provide suggestions on directions with enhancing recognition accuracy through data diversification, dataset accountability, and algorithmic regulation.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131920030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faithful and Customizable Explanations of Black Box Models","authors":"Himabindu Lakkaraju, Ece Kamar, R. Caruana, J. Leskovec","doi":"10.1145/3306618.3314229","DOIUrl":"https://doi.org/10.1145/3306618.3314229","url":null,"abstract":"As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model agnostic framework which facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework provides end users (e.g., doctors) with the flexibility of customizing the model explanations by allowing them to input the features of interest. The construction of explanations is guided by a novel objective function that we propose to simultaneously optimize for fidelity to the original model, unambiguity and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets each of which captures the behavior of a given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation with real-world datasets and user studies demonstrate that our approach can generate customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models compared to state-of-the-art baselines.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"108 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131435845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning in Legal Practice: Notes from Recent History","authors":"Fernando A. Delgado","doi":"10.1145/3306618.3314324","DOIUrl":"https://doi.org/10.1145/3306618.3314324","url":null,"abstract":"Often framed as a relatively new and controversial phenomenon, the application of machine learning (ML) techniques to legal analysis and decision-making in the US justice system has a rich yet under examined history. My research examines how ML came to be adopted as a standard tool for automating fact discovery for high-stakes civil litigation. By analyzing the key controversies and consensuses that emerge during the experimentation and early adoption phase of this technology (2008-2015), a useful case study presents itself in which an expert professional field wrestled with the challenges of integrating ML into sensitive decision-making workflows.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131789848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Technological Advances Can Reveal Rights","authors":"Jack Parker, D. Danks","doi":"10.1145/3306618.3314274","DOIUrl":"https://doi.org/10.1145/3306618.3314274","url":null,"abstract":"Over recent decades, technological development has been accompanied by the proposal of new rights by various groups and individuals: the right to public anonymity, the right to be forgotten, and the right to disconnect, for example. Although there is widespread acknowledgment of the motivation behind these proposed rights, there is little agreement about their actual normative status. One potential challenge is that the claims only arise in contingent social-technical contexts, which may affect how we conceive of them ethically (albeit, not necessarily in terms of policy). What sort of morally legitimate rights claims depend on such contingencies? Our paper investigates the grounds on which such proposals might be considered \"actual\" rights. The full paper can be found at http://www.andrew.cmu.edu/user/cgparker/Parker_Danks_RevealedRights.pdf. We propose the notion of a revealed right, a right that only imposes duties -- and thus is only meaningfully revealed -- in certain technological contexts. Our framework is based on an interest theory approach to rights, which understands rights in terms of a justificatory role: morally important aspects of a person's well-being (interests) ground rights, which then justify holding someone to a duty that promotes or protects that interest. Our framework uses this approach to interpret the conflicts that lead to revealed rights in terms of how technological developments cause shifts in the balance of power to promote particular interests. Different parties can have competing or conflicting interests. It is also generally accepted that some interests are more normatively important than others (even if only within a particular framework). We can refer to this difference in importance by saying that the former interest has less \"moral weight\" than the latter interest (in that context). The moral weight of an interest is connected to its contribution to the interest-holder's overall well-being, and thereby determines the strength of the reason that a corresponding right provides to justify a duty. Improved technology can offer resources that grant one party increased causal power to realize its interests to the detriment of another's capacity to do so, even while the relative moral weight of their interests remain the same. Such changes in circumstance can make the importance of protecting a particular interest newly salient. If that interest's moral weight justifies establishing a duty to protect it, thereby limiting the threat posed by the new socio-technical context, then a right is revealed. Revealed rights justify realignment between the moral weight and causal power orderings so that people with weightier interests have greater power to protect those interests. In the extended paper, we show how this account can be applied to the interpretation of two recently proposed \"rights\": the right to be forgotten, and the right to disconnect. 
Since we are focused on making sense of revealed rights, not any particul","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133061177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equalized Odds Implies Partially Equalized Outcomes Under Realistic Assumptions","authors":"D. McNamara","doi":"10.1145/3306618.3314290","DOIUrl":"https://doi.org/10.1145/3306618.3314290","url":null,"abstract":"Equalized odds -- where the true positive rates and false positive rates are equal across groups (e.g. racial groups) -- is a common quantitative measure of fairness. Equalized outcomes -- where the difference in predicted outcomes between groups is less than the difference observed in the training data -- is more contentious, because it is incompatible with perfectly accurate predictions. We formalize and quantify the relationship between these two important but seemingly distinct notions of fairness. We show that under realistic assumptions, equalized odds implies partially equalized outcomes. We prove a comparable result for approximately equalized odds. In addition, we generalize a well-known previous result about the incompatibility of equalized odds and another definition of fairness known as calibration, by showing that partially equalized outcomes implies non-calibration. Our results highlight the risks of using trends observed across groups to make predictions about individuals.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132412614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rightful Machines and Dilemmas","authors":"A. T. Wright","doi":"10.1145/3306618.3314261","DOIUrl":"https://doi.org/10.1145/3306618.3314261","url":null,"abstract":"Tn this paper I set out a new Kantian approach to resolving conflicts and dilemmas of obligation for semi-autonomous machine agents such as self-driving cars. First, I argue that efforts to build explicitly moral machine agents should focus on what Kant refers to as duties of right, or justice, rather than on duties of virtue, or ethics. In a society where everyone is morally equal, no one individual or group has the normative authority to unilaterally decide how moral conflicts should be resolved for everyone. Only public institutions to which everyone could consent have the authority to define, enforce, and adjudicate our rights and obligations with respect to one other. Then, I show how the shift from ethics to a standard of justice resolves the conflict of obligations in what is known as the \"trolley problem\" for rightful machine agents. Finally, I consider how a deontic logic suitable for governing explicitly rightful machines might meet the normative requirements of justice.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128736454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Formal Models of Blameworthiness","authors":"Meir Friedenberg","doi":"10.1145/3306618.3314321","DOIUrl":"https://doi.org/10.1145/3306618.3314321","url":null,"abstract":"As we move towards an era in which autonomous systems are ubiquitous, being able to reason formally about moral responsibility for outcomes will become more and more critical. My research has focused on formalizing notions of blameworthiness and responsibility. I summarize here some work by myself and others towards this end and also discuss interesting directions for future work.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117290803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}