{"title":"Framing Artificial Intelligence in American Newspapers","authors":"C. Chuan, W. Tsai, Sumi Cho","doi":"10.1145/3306618.3314285","DOIUrl":"https://doi.org/10.1145/3306618.3314285","url":null,"abstract":"Publics' perceptions of new scientific advances such as AI are often informed and influenced by news coverage. To understand how artificial intelligence (AI) was framed in U.S. newspapers, a content analysis based on framing theory in journalism and science communication was conducted. This study identified the dominant topics and frames, as well as the risks and benefits of AI covered in five major American newspapers from 2009 to 2018. Results indicated that business and technology were the primary topics in news coverage of AI. The benefits of AI were discussed more frequently than its risks, but risks of AI were generally discussed with greater specificity. Additionally, episodic issue framing and societal impact framing were more frequently used.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133386882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Epistemic Therapy for Bias in Automated Decision-Making","authors":"T. Gilbert, Yonatan Dov Mintz","doi":"10.1145/3306618.3314294","DOIUrl":"https://doi.org/10.1145/3306618.3314294","url":null,"abstract":"Despite recent interest in both the critical and machine learning literature on \"bias\" in artificial intelligence (AI) systems, the nature of specific biases stemming from the interaction of machines, humans, and data remains ambiguous. Influenced by Gendler's work on human cognitive biases, we introduce the concept of alief-discordant belief, the tension between the intuitive moral dispositions of designers and the explicit representations generated by algorithms. Our discussion of alief-discordant belief diagnoses the ethical concerns that arise when designing AI systems atop human biases. We furthermore codify the relationship between data, algorithms, and engineers as components of this cognitive discordance, comprising a novel epistemic framework for ethics in AI.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132172597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures","authors":"Daniel Susser","doi":"10.1145/3306618.3314286","DOIUrl":"https://doi.org/10.1145/3306618.3314286","url":null,"abstract":"For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions--what behavioral economists call our choice architectures--are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, artificial intelligence and machine learning (AI/ML) makes it possible for those options and their framings--the choice architectures--to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies we pay them little notice. They are, as philosophers of technology put it, transparent to us--effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123933870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Emotional Intelligence in Social Robots Designed for Children","authors":"De'Aira G. Bryant","doi":"10.1145/3306618.3314319","DOIUrl":"https://doi.org/10.1145/3306618.3314319","url":null,"abstract":"Social robots are robots designed to interact and communicate directly with humans, following traditional social norms. However, many of these current robots operate in discrete settings with predefined expectations for specific social interactions. In order for these machines to operate in the real world, they must be capable of understanding the multiple factors that contribute to human-human interaction. One such factor is emotional intelligence. Emotional intelligence allows one to consider the emotional state of another in order to motivate, plan, and achieve one's desires. One common method of analyzing the emotional state of an individual involves analyzing the emotion displayed on their face. Several artificial intelligence (AI) systems have been developed to conduct this task. These systems are often classifiers trained using a variety of machine learning techniques which require large amounts of training data. As such, they are susceptible to biases that may appear during performance analyses due to disproportions existing in training datasets. Children, in particular, are often less represented in the primary datasets of annotated faces used for training such emotion classification systems. This work seeks to first analyze the extent of these performance differences in commercial systems, then to present new computational techniques that work to mitigate some of the effects of minimal representation in datasets, and to finally present a social robot which utilizes an improved emotional AI to interact with children in various scenarios where emotional intelligence is key to successful human-robot interaction.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117170853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Seductive Allure of Artificial Intelligence-Powered Neurotechnology","authors":"C. Giattino, Lydia Kwong, Chad Rafetto, Nita A. Farahany","doi":"10.1145/3306618.3314269","DOIUrl":"https://doi.org/10.1145/3306618.3314269","url":null,"abstract":"Neuroscience explanations-even when completely irrelevant-have been shown to exert a \"seductive allure\" on individuals, leading them to judge bad explanations or arguments more favorably. There seems to be a similarly seductive allure for artificial intelligence (AI) technologies, leading people to \"overtrust\" these systems, even when they have just witnessed the system perform poorly. The AI-powered neurotechnologies that have begun to proliferate in recent years, particularly those based on electroencephalography (EEG), represent a potentially doubly-alluring combination. While there is enormous potential benefit in applying AI techniques in neuroscience to \"decode\" brain activity and associated mental states, these efforts are still in the early stages, and there is a danger in using these unproven technologies prematurely, especially in important, real-world contexts. Yet, such premature use has begun to emerge in several high-stakes set-tings, including the law, health & wellness, employment, and transportation. In light of the potential seductive allure of these technologies, we need to be vigilant in monitoring their scientific validity and challenging both unsubstantiated claims and misuse, while still actively supporting their continued development and proper use.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115366892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions","authors":"Jess Whittlestone, Rune Nyrup, A. Alexandrova, S. Cave","doi":"10.1145/3306618.3314289","DOIUrl":"https://doi.org/10.1145/3306618.3314289","url":null,"abstract":"The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132854883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence's Impact on Mental Health Treatments","authors":"Michelle C. Ausman","doi":"10.1145/3306618.3314311","DOIUrl":"https://doi.org/10.1145/3306618.3314311","url":null,"abstract":"An interest in artificial intelligence (AI) as a medical aid stemmed as research on mental health and psychology increased. Yet despite failing the Turing Test, AI continues to be used as a practical aid in the psychological community. From virtual reality simulations of everyday activities to robotic pet seals implemented in nursing homes, AI has found a home in the psychological field as a support for those in the medical field as well as those taking care of loved ones. In this paper, I aim to look at the stages of the Turing Test, how those are related to factoid and non-factoid questions and how current applications of AI are used in mental health treatments.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129607189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure","authors":"Alexander Amini, A. Soleimany, Wilko Schwarting, S. Bhatia, D. Rus","doi":"10.1145/3306618.3314243","DOIUrl":"https://doi.org/10.1145/3306618.3314243","url":null,"abstract":"Recent research has highlighted the vulnerabilities of modern machine learning based systems to bias, especially towards segments of society that are under-represented in training data. In this work, we develop a novel, tunable algorithm for mitigating the hidden, and potentially unknown, biases within training data. Our algorithm fuses the original learning task with a variational autoencoder to learn the latent structure within the dataset and then adaptively uses the learned latent distributions to re-weight the importance of certain data points while training. While our method is generalizable across various data modalities and learning tasks, in this work we use our algorithm to address the issue of racial and gender bias in facial detection systems. We evaluate our algorithm on the Pilot Parliaments Benchmark (PPB), a dataset specifically designed to evaluate biases in computer vision systems, and demonstrate increased overall performance as well as decreased categorical bias with our debiasing approach.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129135579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI","authors":"J. Hernández-Orallo, Karina Vold","doi":"10.1145/3306618.3314238","DOIUrl":"https://doi.org/10.1145/3306618.3314238","url":null,"abstract":"Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications.We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127765478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning","authors":"McKane Andrus, T. Gilbert","doi":"10.1145/3306618.3314275","DOIUrl":"https://doi.org/10.1145/3306618.3314275","url":null,"abstract":"While formal definitions of fairness in machine learning (ML) have been proposed, its place within a broader institutional model of fair decision-making remains ambiguous. In this paper we interpret ML as a tool for revealing when and how measures fail to capture purported constructs of interest, augmenting a given institution's understanding of its own interventions and priorities. Rather than codifying \"fair\" principles into ML models directly, the use of ML can thus be understood as a form of quality assurance for existing institutions, exposing the epistemic fault lines of their own measurement practices. Drawing from Friedler et al's [2016] recent discussion of representational mappings and previous discussions on the ontology of measurement, we propose a social measurement assurance program (sMAP) in which ML encourages expert deliberation on a given decision-making procedure by examining unanticipated or previously unexamined covariates. As an example, we apply Rawlsian principles of fairness to sMAP and produce a provisional just theory of measurement that would guide the use of ML for achieving fairness in the case of child abuse in Allegheny County.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122593818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}