{"title":"Modelling and Influencing the AI Bidding War: A Research Agenda","authors":"H. Anh, L. Pereira, T. Lenaerts","doi":"10.1145/3306618.3314265","DOIUrl":"https://doi.org/10.1145/3306618.3314265","url":null,"abstract":"A race for technological supremacy in AI could lead to serious negative consequences, especially whenever ethical and safety procedures are underestimated or even ignored, leading potentially to the rejection of AI in general. For all to enjoy the benefits provided by safe, ethical and trustworthy AI systems, it is crucial to incentivise participants with appropriate strategies that ensure mutually beneficial normative behaviour and safety-compliance from all parties involved. Little attention has been given to understanding the dynamics and emergent behaviours arising from this AI bidding war, and moreover, how to influence it to achieve certain desirable outcomes (e.g. AI for public good and participant compliance). To bridge this gap, this paper proposes a research agenda to develop theoretical models that capture key factors of the AI race, revealing which strategic behaviours may emerge and hypothetical scenarios therein. Strategies from incentive and agreement modelling are directly applicable to systematically analyse how different types of incentives (namely, positive vs. negative, peer vs. institutional, and their combinations) influence safety-compliant behaviours over time, and how such behaviours should be configured to ensure desired global outcomes, studying at the same time how these mechanisms influence AI development. This agenda will provide actionable policies, showing how they need to be employed and deployed in order to achieve compliance and thereby avoid disasters as well as loosing confidence and trust in AI in general.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131670468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Right To Confront Your Accusers: Opening the Black Box of Forensic DNA Software","authors":"Jeanna Neefe Matthews, M. Babaeianjelodar, Stephen Lorenz, Abigail V. Matthews, Mariama Njie, Nathaniel Adams, D. Krane, Jessica Goldthwaite, Clinton Hughes","doi":"10.1145/3306618.3314279","DOIUrl":"https://doi.org/10.1145/3306618.3314279","url":null,"abstract":"The results of forensic DNA software systems are regularly introduced as compelling evidence in criminal trials, but requests by defendants to evaluate how these results are generated are often denied. Furthermore, there is mounting evidence of problems such as failures to disclose substantial changes in methodology to oversight bodies and substantial differences in the results generated by different software systems. In a society that purports to guarantee defendants the right to face their accusers and confront the evidence against them, what then is the role of black-box forensic software systems in moral decision making in criminal justice? In this paper, we examine the case of the Forensic Statistical Tool (FST), a forensic DNA system developed in 2010 by New York City's Office of Chief Medical Examiner (OCME). For over 5 years, expert witness review requested by defense teams was denied, even under protective order, while the system was used in over 1300 criminal cases. When the first expert review was finally permitted in 2016, many problems were identified including an undisclosed function capable of dropping evidence that could be beneficial to the defense. Overall, the findings were so substantial that a motion to release the full source code of FST publicly was granted. In this paper, we quantify the impact of this undisclosed function on samples from OCME's own validation study and discuss the potential impact on individual defendants. Specifically, we find that 104 of the 439 samples (23.7%) triggered the undisclosed data-dropping behavior and that the change skewed results toward false inclusion for individuals whose DNA was not present in an evidence sample. Beyond this, we consider what changes in the criminal justice system could prevent problems like this from going unresolved in the future.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123884871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Degenerate Feedback Loops in Recommender Systems","authors":"Ray Jiang, S. Chiappa, Tor Lattimore, A. György, Pushmeet Kohli","doi":"10.1145/3306618.3314288","DOIUrl":"https://doi.org/10.1145/3306618.3314288","url":null,"abstract":"Machine learning is used extensively in recommender systems deployed in products. The decisions made by these systems can influence user beliefs and preferences which in turn affect the feedback the learning system receives - thus creating a feedback loop. This phenomenon can give rise to the so-called \"echo chambers\" or \"filter bubbles\" that have user and societal implications. In this paper, we provide a novel theoretical analysis that examines both the role of user dynamics and the behavior of recommender systems, disentangling the echo chamber from the filter bubble effect. In addition, we offer practical solutions to slow down system degeneracy. Our study contributes toward understanding and developing solutions to commonly cited issues in the complex temporal scenario, an area that is still largely unexplored.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122599705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AIES 2019 Student Submission","authors":"Elija Perrier","doi":"10.1145/3306618.3314318","DOIUrl":"https://doi.org/10.1145/3306618.3314318","url":null,"abstract":"In this abstract, I intend to outline a number of concurrent multidisciplinary research programmes in which I am engaged. Firstly, I will briefly outline my current PhD research in quantum machine learning and its connections to philosophical and logical research, especially that informed by category theory. Secondly, I will detail my research regarding the role of computational complexity in frameworks for ethical artificial intelligence and algorithmic governance, drawing upon cross-disciplinary application of my computational science, physical science, economic science and legal research backgrounds.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"327 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122635768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Formal Approach to Explainability","authors":"Lior Wolf, Tomer Galanti, Tamir Hazan","doi":"10.1145/3306618.3314260","DOIUrl":"https://doi.org/10.1145/3306618.3314260","url":null,"abstract":"We regard explanations as a blending of the input sample and the model's output and offer a few definitions that capture various desired properties of the function that generates these explanations. We study the links between these properties and between explanation-generating functions and intermediate representations of learned models and are able to show, for example, that if the activations of a given layer are consistent with an explanation, then so do all other subsequent layers. In addition, we study the intersection and union of explanations as a way to construct new explanations.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121240755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How We Talk About AI (and Why It Matters)","authors":"Ryan Calo","doi":"10.1145/3306618.3314225","DOIUrl":"https://doi.org/10.1145/3306618.3314225","url":null,"abstract":"How we talk about artificial intelligence matters. Not only do our rhetorical choices influence public expectations of AI, they implicitly make the case for or against specific government interventions. Conceiving of AI as a global project to which each nation can contribute, for instance, suggests a different course of action than understanding AI as a \"race\" America cannot afford to lose. And just as inflammatory terms such as \"killer robot\" aim to catalyze limitations of autonomous weapons, so do the popular terms \"ethics\" and \"governance\" subtly argue for a lesser role for government in setting AI policy. How should we talk about AI? And what's at stake with our rhetorical choices? This presentation explores the interplay between claims about AI and law's capacity to channel AI in the public interest.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127694087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Killer Robots and Human Dignity","authors":"Daniel Lim","doi":"10.1145/3306618.3314291","DOIUrl":"https://doi.org/10.1145/3306618.3314291","url":null,"abstract":"Lethal Autonomous Weapon Systems (LAWS) have become the center of an internationally relevant ethical debate. Deontological arguments based on putative legal compliance failures and the creation of accountability gaps along with wide consequentialist arguments based on factors like the ease of engaging in wars have been leveraged by a number of different states and organizations to try and reach global consensus on a ban of LAWS. This paper will focus on one strand of deontological arguments-ones based on human dignity. Merely asserting that LAWS pose a threat to human dignity would be question begging. Independent evidence based on a morally relevant distinction between humans and LAWS is needed. There are at least four reasons to think that the capacity for emotion cannot be a morally relevant distinction. First, if the concept of human dignity is given a subjective definition, whether or not lethal force is administered by humans or LAWS seems to be irrelevant. Second, it is far from clear that human combatants either have the relevant capacity for emotion or that the capacity is exercised in the relevant circumstances. Third, the capacity for emotion can actually be an impediment to the exercising of a combatant's ability to treat an enemy respectfully. Fourth, there is strong inductive evidence to believe that any capacity, when sufficiently well described, can be carried out by artificially intelligent programs.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115029407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Global Explanations of Neural Networks: Mapping the Landscape of Predictions","authors":"Mark Ibrahim, Melissa Louie, C. Modarres, J. Paisley","doi":"10.1145/3306618.3314230","DOIUrl":"https://doi.org/10.1145/3306618.3314230","url":null,"abstract":"A barrier to the wider adoption of neural networks is their lack of interpretability. While local explanation methods exist for one prediction, most global attributions still reduce neural network decisions to a single set of features. In response, we present an approach for generating global attributions called GAM, which explains the landscape of neural network predictions across subpopulations. GAM augments global explanations with the proportion of samples that each attribution best explains and specifies which samples are described by each attribution. Global explanations also have tunable granularity to detect more or fewer subpopulations. We demonstrate that GAM's global explanations 1) yield the known feature importances of simulated data, 2) match feature weights of interpretable statistical models on real data, and 3) are intuitive to practitioners through user studies. With more transparent predictions, GAM can help ensure neural network decisions are generated for the right reasons.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117276352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Theories of Parenting and Their Application to Artificial Intelligence","authors":"S. Croeser, P. Eckersley","doi":"10.1145/3306618.3314231","DOIUrl":"https://doi.org/10.1145/3306618.3314231","url":null,"abstract":"As machine learning (ML) systems have advanced, they have acquired more power over humans' lives, and questions about what values are embedded in them have become more complex and fraught. It is conceivable that in the coming decades, humans may succeed in creating artificial general intelligence (AGI) that thinks and acts with an open-endedness and autonomy comparable to that of humans. The implications would be profound for our species; they are now widely debated not just in science fiction and speculative research agendas but increasingly in serious technical and policy conversations. Much work is underway to try to weave ethics into advancing ML research. We think it useful to add the lens of parenting to these efforts, and specifically radical, queer theories of parenting that consciously set out to nurture agents whose experiences, objectives and understanding of the world will necessarily be very different from their parents'. We propose a spectrum of principles which might underpin such an effort; some are relevant to current ML research, while others will become more important if AGI becomes more likely. These principles may encourage new thinking about the development, design, training, and release into the world of increasingly autonomous agents.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123857041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population","authors":"Shiwali Mohan, Frances Yan, V. Bellotti, A. Elbery, Hesham A Rakha, M. Klenk","doi":"10.1145/3306618.3314271","DOIUrl":"https://doi.org/10.1145/3306618.3314271","url":null,"abstract":"Our research aims at developing intelligent systems to reduce the transportation-related energy expenditure of a large city by influencing individual behavior. We introduce Copter - an intelligent travel assistant that evaluates multi-modal travel alternatives to find a plan that is acceptable to a person given their context and preferences. We propose a formulation for acceptable planning that brings together ideas from AI, machine learning, and economics. This formulation has been incorporated in Copter producing acceptable plans in real-time. We adopt a novel empirical evaluation framework that combines human decision data with high-fidelity simulation to demonstrate a 4% energy reduction and 20% delay reduction in a realistic deployment scenario in Los Angeles, California, USA.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"116 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120876759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}