{"title":"Good Explanation for Algorithmic Transparency","authors":"Joy Lu, Dokyun Lee, Tae Wan Kim, D. Danks","doi":"10.2139/ssrn.3503603","DOIUrl":"https://doi.org/10.2139/ssrn.3503603","url":null,"abstract":"Machine learning algorithms have gained widespread usage across a variety of domains, both in providing predictions to expert users and recommending decisions to everyday users. However, these AI systems are often black boxes, and end-users are rarely provided with an explanation. The critical need for explanation by AI systems has led to calls for algorithmic transparency, including the \"right to explanation'' in the EU General Data Protection Regulation (GDPR). These initiatives presuppose that we know what constitutes a meaningful or good explanation, but there has actually been surprisingly little research on this question in the context of AI systems. In this paper, we (1) develop a generalizable framework grounded in philosophy, psychology, and interpretable machine learning to investigate and define characteristics of good explanation, and (2) conduct a large-scale lab experiment to measure the impact of different factors on people's perceptions of understanding, usage intention, and trust of AI systems. The framework and study together provide a concrete guide for managers on how to present algorithmic prediction rationales to end-users to foster trust and adoption, and elements of explanation and transparency to be considered by AI researchers and engineers in designing, developing, and deploying transparent or explainable algorithms.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87614012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FACE","authors":"Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, Tijl De Bie, Peter A. Flach","doi":"10.1145/3375627.3375850","DOIUrl":"https://doi.org/10.1145/3375627.3375850","url":null,"abstract":"Work in Counterfactual Explanations tends to focus on the principle of \"the closest possible world\" that identifies small changes leading to the desired outcome. In this paper we argue that while this approach might initially seem intuitively appealing it exhibits shortcomings not addressed in the current literature. First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports). Secondly, the counterfactuals may not be based on a \"feasible path\" between the current state of the subject and the suggested one, making actionable recourse infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to double their salary, which may be hard without first increasing their skill level). These two shortcomings may render counterfactual explanations impractical and sometimes outright offensive. To address these two major flaws, first of all, we propose a new line of Counterfactual Explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal. Secondly, we propose FACE: an algorithmically sound way of uncovering these \"feasible paths\" based on the shortest path distances defined via density-weighted metrics. Our approach generates counterfactuals that are coherent with the underlying data distribution and supported by the \"feasible paths\" of change, which are achievable and can be tailored to the problem at hand.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"87 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84221123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Different \"Intelligibility\" for Different Folks","authors":"Yishan Zhou, D. Danks","doi":"10.1145/3375627.3375810","DOIUrl":"https://doi.org/10.1145/3375627.3375810","url":null,"abstract":"Many arguments have concluded that our autonomous technologies must be intelligible, interpretable, or explainable, even if that property comes at a performance cost. In this paper, we consider the reasons why some property like these might be valuable, we conclude that there is not simply one kind of 'intelligibility', but rather different types for different individuals and uses. In particular, different interests and goals require different types of intelligibility (or explanations, or other related notion). We thus provide a typography of 'intelligibility' that distinguishes various notions, and draw methodological conclusions about how autonomous technologies should be designed and deployed in different ways, depending on whose intelligibility is required.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"253 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73299998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Contracts for Non-Cooperative Games","authors":"Alan Davoust, Michael Rovatsos","doi":"10.1145/3375627.3375829","DOIUrl":"https://doi.org/10.1145/3375627.3375829","url":null,"abstract":"In future agent societies, we might see AI systems engaging in selfish, calculated behavior, furthering their owners' interests instead of socially desirable outcomes. How can we promote morally sound behaviour in such settings, in order to obtain more desirable outcomes? A solution from moral philosophy is the concept of a social contract, a set of rules that people would voluntarily commit to in order to obtain better outcomes than those brought by anarchy. We adapt this concept to a game-theoretic setting, to systematically modify the payoffs of a non-cooperative game, so that agents will rationally pursue socially desirable outcomes. We show that for any game, a suitable social contract can be designed to produce an optimal outcome in terms of social welfare. We then investigate the limitations of applying this approach to alternative moral objectives, and establish that, for any alternative moral objective that is significantly different from social welfare, there are games for which no such social contract will be feasible that produces non-negligible social benefit compared to collective selfish behaviour.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91433640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Steps Towards Value-Aligned Systems","authors":"Osonde A. Osoba, Benjamin Boudreaux, Douglas Yeung","doi":"10.1145/3375627.3375872","DOIUrl":"https://doi.org/10.1145/3375627.3375872","url":null,"abstract":"Algorithmic (including AI/ML) decision-making artifacts are an established and growing part of our decision-making ecosystem. They are now indispensable tools that help us manage the flood of information we use to try to make effective decisions in a complex world. The current literature is full of examples of how individual artifacts violate societal norms and expectations (e.g. violations of fairness, privacy, or safety norms). Against this backdrop, this discussion highlights an under-emphasized perspective in the body of research focused on assessing value misalignment in AI-equipped sociotechnical systems. The research on value misalignment so far has a strong focus on the behavior of individual tech artifacts. This discussion argues for a more structured systems-level approach for assessing value-alignment in sociotechnical systems. We rely primarily on the research on fairness to make our arguments more concrete. And we use the opportunity to highlight how adopting a system perspective improves our ability to explain and address value misalignments better. Our discussion ends with an exploration of priority questions that demand attention if we are to assure the value alignment of whole systems, not just individual artifacts.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82808749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Deontic Logic for Programming Rightful Machines","authors":"A. T. Wright","doi":"10.1145/3375627.3375867","DOIUrl":"https://doi.org/10.1145/3375627.3375867","url":null,"abstract":"A \"rightful machine\" is an explicitly moral, autonomous machine agent whose behavior conforms to principles of justice and the positive public law of a legitimate state. In this paper, I set out some basic elements of a deontic logic appropriate for capturing conflicting legal obligations for purposes of programming rightful machines. Justice demands that the prescriptive system of enforceable public laws be consistent, yet statutes or case holdings may often describe legal obligations that contradict; moreover, even fundamental constitutional rights may come into conflict. I argue that a deontic logic of the law should not try to work around such conflicts but, instead, identify and expose them so that the rights and duties that generate inconsistencies in public law can be explicitly qualified and the conflicts resolved. I then argue that a credulous, non-monotonic deontic logic can describe inconsistent legal obligations while meeting the normative demand for consistency in the prescriptive system of public law. I propose an implementation of this logic via a modified form of \"answer set programming,\" which I demonstrate with some simple examples.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82236361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Optimization of AI Fairness and Utility: A Human-Centered Approach","authors":"Yunfeng Zhang, R. Bellamy, Kush R. Varshney","doi":"10.1145/3375627.3375862","DOIUrl":"https://doi.org/10.1145/3375627.3375862","url":null,"abstract":"Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community has proposed many methods to measure and mitigate unwanted biases, but few of them involve inputs from human policy makers. We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, and because achieving fairness often requires sacrificing other objectives such as model accuracy, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives. In this paper, we propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85914220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deepfakes for Medical Video De-Identification: Privacy Protection and Diagnostic Information Preservation","authors":"Bingquan Zhu, Hao Fang, Yanan Sui, Luming Li","doi":"10.1145/3375627.3375849","DOIUrl":"https://doi.org/10.1145/3375627.3375849","url":null,"abstract":"Data sharing for medical research has been difficult as open-sourcing clinical data may violate patient privacy. Traditional methods for face de-identification wipe out facial information entirely, making it impossible to analyze facial behavior. Recent advancements on whole-body keypoints detection also rely on facial input to estimate body keypoints. Both facial and body keypoints are critical in some medical diagnoses, and keypoints invariability after de-identification is of great importance. Here, we propose a solution using deepfake technology, the face swapping technique. While this swapping method has been criticized for invading privacy and portraiture right, it could conversely protect privacy in medical video: patients' faces could be swapped to a proper target face and become unrecognizable. However, it remained an open question that to what extent the swapping de-identification method could affect the automatic detection of body keypoints. In this study, we apply deepfake technology to Parkinson's disease examination videos to de-identify subjects, and quantitatively show that: face-swapping as a de-identification approach is reliable, and it keeps the keypoints almost invariant, significantly better than traditional methods. This study proposes a pipeline for video de-identification and keypoint preservation, clearing up some ethical restrictions for medical data sharing. This work could make open-source high quality medical video datasets more feasible and promote future medical research that benefits our society.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"78 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80840774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning","authors":"A. Smart, Larry James, B. Hutchinson, Simone Wu, Shannon Vallor","doi":"10.1145/3375627.3375866","DOIUrl":"https://doi.org/10.1145/3375627.3375866","url":null,"abstract":"In this paper we argue that standard calls for explainability that focus on the epistemic inscrutability of black-box machine learning models may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to wonder what kind of em justification it involves. How do we rationalize on the one hand the seeming justificatory black box with the observed wide adoption of machine learning? We argue that, in general, people implicitly adoptreliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method citegoldman2012reliabilism. We argue that, in cases where model deployments require em moral justification, reliabilism is not sufficient, and instead justifying deployment requires establishing robust human processes as a moral \"wrapper'' around machine outputs. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide another kind of necessary justification---moral justification. Finally, we offer cautions relevant to the (implicit or explicit) adoption of the reliabilist interpretation of machine learning.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78829833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biased Priorities, Biased Outcomes: Three Recommendations for Ethics-oriented Data Annotation Practices","authors":"Gunay Kazimzade, Milagros Miceli","doi":"10.1145/3375627.3375809","DOIUrl":"https://doi.org/10.1145/3375627.3375809","url":null,"abstract":"In this paper, we analyze the relation between data-related biases and practices of data annotation, by placing them in the context of market economy. We understand annotation as a praxis related to the sensemaking of data and investigate annotation practices for vision models by focusing on the values that are prioritized by industrial decision-makers and practitioners. The quality of data is critical for machine learning models as it holds the power to (mis-)represent the population it is intended to analyze. For autonomous systems to be able to make sense of the world, humans first need to make sense of the data these systems will be trained on. This paper addresses this issue, guided by the following research questions: Which goals are prioritized by decision-makers at the data annotation stage? How do these priorities correlate with data-related bias issues? Focusing on work practices and their context, our research goal aims at understanding the logics driving companies and their impact on the performed annotations. The study follows a qualitative design and is based on 24 interviews with relevant actors and extensive participatory observations, including several weeks of fieldwork at two companies dedicated to data annotation for vision models in Buenos Aires, Argentina and Sofia, Bulgaria. The prevalence of market-oriented values over socially responsible approaches is argued based on three corporate priorities that inform work practices in this field and directly shape the annotations performed: profit (short deadlines connected to the strive for profit are prioritized over alternative approaches that could prevent biased outcomes), standardization (the strive for standardized and, in many cases, reductive or biased annotations to make data fit the products and revenue plans of clients), and opacity (related to client's power to impose their criteria on the annotations that are performed. Criteria that most of the times remain opaque due to corporate confidentiality). Finally, we introduce three elements, aiming at developing ethics-oriented practices of data annotation, that could help prevent biased outcomes: transparency (regarding the documentation of data transformations, including information on responsibilities and criteria for decision-making.), education (training on the potential harms caused by AI and its ethical implications, that could help data annotators and related roles adopt a more critical approach towards the interpretation and labeling of data), and regulations (clear guidelines for ethical AI developed at the governmental level and applied both in private and public organizations).","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"449 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77854278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}