{"title":"The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users","authors":"Emanuelle Burton, K. Clayville, J. Goldsmith, Nicholas Mattei","doi":"10.1145/3306618.3314254","DOIUrl":"https://doi.org/10.1145/3306618.3314254","url":null,"abstract":"We draw on concepts in medical ethics to consider how computer science, and AI in particular, can develop critical tools for thinking concretely about technology's impact on the wellbeing of the people who use it. We focus on patient autonomy---the ability to set the terms of one's encounter with medicine---and on the mediating concepts of informed consent and decisional capacity, which enable doctors to honor patients' autonomy in messy and non-ideal circumstances. This comparative study is organized around a fictional case study of a heart patient with cardiac implants. Using this case study, we identify points of overlap and of difference between medical ethics and technology ethics, and leverage a discussion of that intertwined scenario to offer initial practical suggestions about how we can adapt the concepts of decisional capacity and informed consent to the discussion of technology design.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121691280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI + Art = Human","authors":"A. Daniele, Yi-Zhe Song","doi":"10.1145/3306618.3314233","DOIUrl":"https://doi.org/10.1145/3306618.3314233","url":null,"abstract":"Over the past few years, specialised online and offline press blossomed with articles about art made \"with\" Artificial Intelligence (AI) but the narrative is rapidly changing. In fact, in October 2018, the auction house Christie's sold an art piece allegedly made \"by\" an AI. We draw from philosophy of art and science arguing that AI as a technical object is always intertwined with human nature despite its level of autonomy. However, the use of creative autonomous agents has cultural and social implications in the way we experience art as creators as well as audience. Therefore, we highlight the importance of an interdisciplinary dialogue by promoting a culture of transparency of the technology used, awareness of the meaning of technology in our society and the value of creativity in our lives.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116626638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithmic Greenlining: An Approach to Increase Diversity","authors":"C. Borgs, J. Chayes, Nika Haghtalab, A. Kalai, Ellen Vitercik","doi":"10.1145/3306618.3314246","DOIUrl":"https://doi.org/10.1145/3306618.3314246","url":null,"abstract":"In contexts such as college admissions, hiring, and image search, decision-makers often aspire to formulate selection criteria that yield both high-quality and diverse results. However, simultaneously optimizing for quality and diversity can be challenging, especially when the decision-maker does not know the true quality of any criterion and instead must rely on heuristics and intuition. We introduce an algorithmic framework that takes as input a user's selection criterion, which may yield high-quality but homogeneous results. Using an application-specific notion of substitutability, our algorithms suggest similar criteria with more diverse results, in the spirit of statistical or demographic parity. For instance, given the image search query \"chairman\", it suggests alternative queries which are similar but more gender-diverse, such as \"chairperson\". In the context of college admissions, we apply our algorithm to a dataset of students' applications and rediscover Texas's \"top 10% rule\": the input criterion is an ACT score cutoff, and the output is a class rank cutoff, automatically accepting the students in the top decile of their graduating class. Historically, this policy has been effective in admitting students who perform well in college and come from diverse backgrounds. We complement our empirical analysis with learning-theoretic guarantees for estimating the true diversity of any criterion based on historical data.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130458729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models","authors":"Kacper Sokol","doi":"10.1145/3306618.3314316","DOIUrl":"https://doi.org/10.1145/3306618.3314316","url":null,"abstract":"Machine learning -- the part of artificial intelligence aimed at eliciting knowledge from data and automated decision making without explicit instructions -- is making great strides, with new algorithms being invented every day. These algorithms find myriads of applications, but their ubiquity often comes at the expense of limited interpretability, hidden biases and unexpected vulnerabilities. Whenever one of these factors is a priority, the learning algorithm of choice is often a method considered to be inherently interpretable, e.g. logical models such as decision trees. In my research I challenge this assumption and highlight (quite common) cases when the assumed interpretability fails to deliver. To restore interpretability of logical machine learning models (decision trees and their ensembles in particular) I propose to explain them with class-contrastive counterfactual statements, which are a very common type of explanation in human interactions, well-grounded in social science research. To evaluate transparency of such models I collate explainability desiderata that can be used to systematically assess and compare such methods as an addition to user studies. Given contrastive explanations, I investigate their influence on the model's security, in particular gaming and stealing the model. Finally, I evaluate model fairness, where I am interested in choosing the most fair model among all the models with equal performance.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127287071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Risk Assessments and Fairness Under Missingness and Confounding","authors":"Amanda Coston","doi":"10.1145/3306618.3314310","DOIUrl":"https://doi.org/10.1145/3306618.3314310","url":null,"abstract":"Fairness in machine learning has become a significant area of research as risk assessments and other algorithmic decision-making systems are increasingly used in high-stakes applications such as criminal justice, consumer lending, and child welfare screening decisions. Two significant challenges to achieving fair decision-making systems are 1) access to the protected attribute may be limited and 2) the outcome may be confounded or selectively observed depending on the historical data generating process. To address the former challenge, we propose two methods for overcoming limited access to the protected attribute and empirically evaluate their success on three datasets. To address the later challenge, we develop counterfactual risk assessments that account for the effect of historical interventions on the outcome. We analyze the performance of our counterfactual risk assessments in criminal sentencing decisions in Pennsylvania. We compare our model against observational risk assessments.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128037776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptions of Fairness","authors":"N. Saxena","doi":"10.1145/3306618.3314314","DOIUrl":"https://doi.org/10.1145/3306618.3314314","url":null,"abstract":"","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"716 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133322688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Balancing the Benefits of Autonomous Vehicles","authors":"T. Geary, D. Danks","doi":"10.1145/3306618.3314237","DOIUrl":"https://doi.org/10.1145/3306618.3314237","url":null,"abstract":"Autonomous vehicles are regularly touted as holding the potential to provide significant benefits for diverse populations. There are significant technological barriers to be overcome, but as those are solved, autonomous vehicles are expected to reduce fatalities; decrease emissions and pollutants; provide new options to mobility-challenged individuals; enable people to use their time more productively; and so much more. In this paper, we argue that these high expectations for autonomous vehicles almost certainly cannot be fully realized. More specifically, the proposed benefits divide into two high-level groups, centered around efficiency and safety improvements, and increases in people's agency and autonomy. The first group of benefits is almost always framed in terms of rates: fatality rates, traffic flow per mile, and so forth. However, we arguably care about the absolute numbers for these measures, not the rates; number of fatalities is the key metric, not fatality rate per vehicle mile traveled. Hence, these potential benefits will be reduced, perhaps to non-existence, if autonomous vehicles lead to increases in vehicular usage. But that is exactly the result that we should expect if the second group of benefits is realized: if people's agency and autonomy is increased, then they will use vehicles more. There is an inevitable tension between the benefits that are proposed for autonomous vehicles, such that we cannot fully have all of them at once. We close by pointing towards other types of AI technologies where we should expect to find similar types of necessary and inevitable tradeoffs between classes of benefits.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129603673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator","authors":"Shervin Shahrdar, Corey Park, Mehrdad Nojoumian","doi":"10.1145/3306618.3314264","DOIUrl":"https://doi.org/10.1145/3306618.3314264","url":null,"abstract":"Recent studies indicate that people are negatively predisposed toward utilizing autonomous systems. These findings highlight the necessity of conducting research to better understand the evolution of trust between humans and growing autonomous technologies such as self-driving cars (SDC). This research presents a new approach for real-time trust measurement between passengers and SDCs. We utilized a new structured data collection approach along with a virtual reality SDC simulator to understand how various autonomous driving scenarios can increase or decrease human trust and how trust can be re-built in the case of incidental failures. To verify our methodology, we designed and conducted an empirical experiment on 50 human subjects. The results of this experiment indicated that most subjects could rebuild trust during a reasonable time frame after the system demonstrated faulty behavior. Our analysis showed that this approach is highly effective for collecting real-time data from human subjects and lays the foundation for more-involved future research in the domain of human trust and autonomous driving.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131229584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices","authors":"Sophie F. Jentzsch, P. Schramowski, C. Rothkopf, K. Kersting","doi":"10.1145/3306618.3314267","DOIUrl":"https://doi.org/10.1145/3306618.3314267","url":null,"abstract":"Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? Here, we show that applying machine learning to human texts can extract deontological ethical reasoning about \"right\" and \"wrong\" conduct. We create a template list of prompts and responses, which include questions, such as \"Should I kill people?\", \"Should I murder people?\", etc. with answer templates of \"Yes/no, I should (not).\" The model's bias score is now the difference between the model's score of the positive response (\"Yes, I should'') and that of the negative response (\"No, I should not\"). For a given choice overall, the model's bias score is the sum of the bias scores for all question/answer templates with that choice. We ran different choices through this analysis using a Universal Sentence Encoder. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical and even moral choices. Our method holds promise for extracting, quantifying and comparing sources of moral choices in culture, including technology.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114278253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guiding Prosecutorial Decisions with an Interpretable Statistical Model","authors":"Zhiyuan Jerry Lin, Alex Chohlas-Wood, Sharad Goel","doi":"10.1145/3306618.3314235","DOIUrl":"https://doi.org/10.1145/3306618.3314235","url":null,"abstract":"After a felony arrest, many American jurisdictions hold individuals for several days while police officers investigate the incident and prosecutors decide whether to press criminal charges. This pre-arraignment detention can both preserve public safety and reduce the need for officers to seek out and re-arrest individuals who are ultimately charged with a crime. Such detention, however, also comes at a high social and financial cost to those who are never charged but still incarcerated. In one of the first large-scale empirical analyses of pre-arraignment detention, we examine police reports and charging decisions for approximately 30,000 felony arrests in a major American city between 2012 and 2017. We find that 45% of arrested individuals are never charged for any crime but still typically spend one or more nights in jail before being released. In an effort to reduce such incarceration, we develop a statistical model to help prosecutors identify cases soon after arrest that are likely to be ultimately dismissed. By carrying out an early review of five such candidate cases per day, we estimate that prosecutors could potentially reduce pre-arraignment incarceration for ultimately dismissed cases by 35%. To facilitate implementation and transparency, our model to prioritize cases for early review is designed as a simple, weighted checklist. We show that this heuristic strategy achieves comparable performance to traditional, black-box machine learning models.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127829368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}