Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Human-AI Learning Performance in Multi-Armed Bandits
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-12-21 DOI: 10.1145/3306618.3314245
Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, A. Dragan
{"title":"Human-AI Learning Performance in Multi-Armed Bandits","authors":"Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, A. Dragan","doi":"10.1145/3306618.3314245","DOIUrl":"https://doi.org/10.1145/3306618.3314245","url":null,"abstract":"People frequently face challenging decision-making problems in which outcomes are uncertain or unknown. Artificial intelligence (AI) algorithms exist that can outperform humans at learning such tasks. Thus, there is an opportunity for AI agents to assist people in learning these tasks more effectively. In this work, we use a multi-armed bandit as a controlled setting in which to explore this direction. We pair humans with a selection of agents and observe how well each human-agent team performs. We find that team performance can beat both human and agent performance in isolation. Interestingly, we also find that an agent's performance in isolation does not necessarily correlate with the human-agent team's performance. A drop in agent performance can lead to a disproportionately large drop in team performance, or in some settings can even improve team performance. Pairing a human with an agent that performs slightly better than them can make them perform much better, while pairing them with an agent that performs the same can make them them perform much worse. Further, our results suggest that people have different exploration strategies and might perform better with agents that match their strategy. Overall, optimizing human-agent team performance requires going beyond optimizing agent performance, to understanding how the agent's suggestions will influence human decision-making.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"32 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120980799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
What are the Biases in My Word Embedding?
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-12-20 DOI: 10.1145/3306618.3314270
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark D. M. Leiserson, A. Kalai
{"title":"What are the Biases in My Word Embedding?","authors":"Nathaniel Swinger, Maria De-Arteaga, IV NeilThomasHeffernan, Mark D. M. Leiserson, A. Kalai","doi":"10.1145/3306618.3314270","DOIUrl":"https://doi.org/10.1145/3306618.3314270","url":null,"abstract":"This paper presents an algorithm for enumerating biases in word embeddings. The algorithm exposes a large number of offensive associations related to sensitive features such as race and gender on publicly available embeddings, including a supposedly \"debiased\" embedding. These biases are concerning in light of the widespread use of word embeddings. The associations are identified by geometric patterns in word embeddings that run parallel between people's names and common lower-case tokens. The algorithm is highly unsupervised: it does not even require the sensitive features to be pre-specified. This is desirable because: (a) many forms of discrimination?such as racial discrimination-are linked to social constructs that may vary depending on the context, rather than to categories with fixed definitions; and (b) it makes it easier to identify biases against intersectional groups, which depend on combinations of sensitive features. The inputs to our algorithm are a list of target tokens, e.g. names, and a word embedding. It outputs a number of Word Embedding Association Tests (WEATs) that capture various biases present in the data. We illustrate the utility of our approach on publicly available word embeddings and lists of names, and evaluate its output using crowdsourcing. We also show how removing names may not remove potential proxy bias.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1998 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128229910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 89
Using Deceased-Donor Kidneys to Initiate Chains of Living Donor Kidney Paired Donations: Algorithm and Experimentation
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-12-17 DOI: 10.1145/3306618.3314276
Cristina Cornelio, L. Furian, Antonio Nicolò, Francesca Rossi
{"title":"Using Deceased-Donor Kidneys to Initiate Chains of Living Donor Kidney Paired Donations: Algorithm and Experimentation","authors":"Cristina Cornelio, L. Furian, Antonio Nicolò, Francesca Rossi","doi":"10.1145/3306618.3314276","DOIUrl":"https://doi.org/10.1145/3306618.3314276","url":null,"abstract":"We design a flexible algorithm that exploits deceased donor kidneys to initiate chains of living donor kidney paired donations, combining deceased and living donor allocation mechanisms to improve the quantity and quality of kidney transplants. The advantages of this approach have been measured using retrospective data on the pool of donor/recipient incompatible and desensitized pairs at the Padua University Hospital, the largest center for living donor kidney transplants in Italy. The experiments show a remarkable improvement on the number of patients with incompatible donor who could be transplanted, a decrease in the number of desensitization procedures, and an increase in the number of UT patients (that is, patients unlikely to be transplanted for immunological reasons) in the waiting list who could receive an organ.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134043320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-12-11 DOI: 10.1145/3306618.3314257
B. Liao, M. Slavkovik, Leendert van der Torre
{"title":"Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders","authors":"B. Liao, M. Slavkovik, Leendert van der Torre","doi":"10.1145/3306618.3314257","DOIUrl":"https://doi.org/10.1145/3306618.3314257","url":null,"abstract":"An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and is interacting with end-users. We address the challenge of how the moral values and views of all stakeholders can be integrated and reflected in the moral behavior of the autonomous system. We propose an artificial moral agent architecture that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. We show how our architecture can be used not only for ethical practical reasoning and collaborative decision-making, but also for the explanation of such moral behavior.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128109147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Toward the Engineering of Virtuous Machines
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-12-07 DOI: 10.1145/3306618.3314256
Naveen Sundar Govindarajulu, S. Bringsjord, Rikhiya Ghosh, Vasanth Sarathy
{"title":"Toward the Engineering of Virtuous Machines","authors":"Naveen Sundar Govindarajulu, S. Bringsjord, Rikhiya Ghosh, Vasanth Sarathy","doi":"10.1145/3306618.3314256","DOIUrl":"https://doi.org/10.1145/3306618.3314256","url":null,"abstract":"While various traditions under the 'virtue ethics' umbrella have been studied extensively and advocated by ethicists, it has not been clear that there exists a version of virtue ethics rigorous enough to be a target for machine ethics (which we take to include the engineering of an ethical sensibility in a machine or robot itself, not only the study of ethics in the humans who might create artificial agents). We begin to address this by presenting an embryonic formalization of a key part of any virtue-ethics theory: namely, the learning of virtue by a focus on exemplars of moral virtue. Our work is based in part on a computational formal logic previously used to formally model other ethical theories and principles therein, and to implement these models in artificial agents.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121496996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-11-19 DOI: 10.1145/3306618.3314259
A. Peysakhovich
{"title":"Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2","authors":"A. Peysakhovich","doi":"10.1145/3306618.3314259","DOIUrl":"https://doi.org/10.1145/3306618.3314259","url":null,"abstract":"Inferring a person's goal from their behavior is an important problem in applications of AI (e.g. automated assistants, recommender systems). The workhorse model for this task is the rational actor model - this amounts to assuming that people have stable reward functions, discount the future exponentially, and construct optimal plans. Under the rational actor assumption techniques such as inverse reinforcement learning (IRL) can be used to infer a person's goals from their actions. A competing model is the dual-system model. Here decisions are the result of an interplay between a fast, automatic, heuristic-based system 1 and a slower, deliberate, calculating system 2. We generalize the dual system framework to the case of Markov decision problems and show how to compute optimal plans for dual-system agents. We show that dual-system agents exhibit behaviors that are incompatible with rational actor assumption. We show that naive applications of rational-actor IRL to the behavior of dual-system agents can generate wrong inference about the agents' goals and suggest interventions that actually reduce the agent's overall utility. Finally, we adapt a simple IRL algorithm to correctly infer the goals of dual system decision-makers. This allows us to make interventions that help, rather than hinder, the dual-system agent's ability to reach their true goals.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129404868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-11-14 DOI: 10.1145/3306618.3314239
Vahid Behzadan, J. Minton, Arslan Munir
{"title":"TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles","authors":"Vahid Behzadan, J. Minton, Arslan Munir","doi":"10.1145/3306618.3314239","DOIUrl":"https://doi.org/10.1145/3306618.3314239","url":null,"abstract":"This paper presents TrolleyMod v1.0, an open-source platform based on the CARLA simulator for the collection of ethical decision-making data for autonomous vehicles. This platform is designed to facilitate experiments aiming to observe and record human decisions and actions in high-fidelity simulations of ethical dilemmas that occur in the context of driving. Targeting experiments in the class of trolley problems, TrolleyMod provides a seamless approach to creating new experimental settings and environments with the realistic physics-engine and the high-quality graphical capabilities of CARLA and the Unreal Engine. Also, TrolleyMod provides a straightforward interface between the CARLA environment and Python to enable the implementation of custom controllers, such as deep reinforcement learning agents. The results of such experiments can be used for sociological analyses, as well as the training and tuning of value-aligned autonomous vehicles based on social values that are inferred from observations.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128718230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
TED: Teaching AI to Explain its Decisions
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-11-12 DOI: 10.1145/3306618.3314273
N. Codella, M. Hind, K. Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, A. Mojsilovic
{"title":"TED: Teaching AI to Explain its Decisions","authors":"N. Codella, M. Hind, K. Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, A. Mojsilovic","doi":"10.1145/3306618.3314273","DOIUrl":"https://doi.org/10.1145/3306618.3314273","url":null,"abstract":"Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem. It introduces a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations that match the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, resulting in highly accurate explanations with no loss of prediction accuracy for these two examples.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124167806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 90
How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-11-08 DOI: 10.1145/3306618.3314248
N. Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, D. Parkes, Y. Liu
{"title":"How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness","authors":"N. Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, D. Parkes, Y. Liu","doi":"10.1145/3306618.3314248","DOIUrl":"https://doi.org/10.1145/3306618.3314248","url":null,"abstract":"What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement over a particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across two online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., race of the loan applicants). Overall, one definition (calibrated fairness) tends to be more pre- ferred than the others, and the results also provide support for the principle of affirmative action.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132439350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 136
Legible Normativity for AI Alignment: The Value of Silly Rules
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2018-11-03 DOI: 10.1145/3306618.3314258
Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield
{"title":"Legible Normativity for AI Alignment: The Value of Silly Rules","authors":"Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield","doi":"10.1145/3306618.3314258","DOIUrl":"https://doi.org/10.1145/3306618.3314258","url":null,"abstract":"It has become commonplace to assert that autonomous agents will have to be built to follow human rules of behavior--social norms and laws. But human laws and norms are complex and culturally varied systems; in many cases agents will have to learn the rules. This requires autonomous agents to have models of how human rule systems work so that they can make reliable predictions about rules. In this paper we contribute to the building of such models by analyzing an overlooked distinction between important rules and what we call silly rules -- rules with no discernible direct impact on welfare. We show that silly rules render a normative system both more robust and more adaptable in response to shocks to perceived stability. They make normativity more legible for humans, and can increase legibility for AI systems as well. For AI systems to integrate into human normative systems, we suggest, it may be important for them to have models that include representations of silly rules.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123925054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13