Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Mapping Missing Population in Rural India: A Deep Learning Approach with Satellite Imagery
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314263
Wenjie Hu, Jay Patel, Zoe-Alanah Robert, P. Novosad, S. Asher, Zhongyi Tang, M. Burke, D. Lobell, Stefano Ermon
Abstract: Millions of people worldwide are absent from their country's census. Accurate, current, and granular population metrics are critical to improving government allocation of resources, to measuring disease control, to responding to natural disasters, and to studying any aspect of human life in these communities. Satellite imagery can provide sufficient information to build a population map without the cost and time of a government census. We present two Convolutional Neural Network (CNN) architectures which efficiently and effectively combine satellite imagery inputs from multiple sources to accurately predict the population density of a region. In this paper, we use satellite imagery from rural villages in India and population labels from the 2011 SECC census. Our best model achieves better performance than prior work as well as LandScan, a community standard for global population distribution.
Citations: 29
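
To make the multi-source fusion idea in the abstract above concrete, here is a minimal sketch of a two-branch CNN that combines two imagery sources and regresses a population-density value. The branch structure, channel counts, and layer sizes are illustrative assumptions, not the architectures evaluated in the paper.

    # Hypothetical two-branch CNN: one branch per imagery source (e.g., RGB and
    # a 7-band multispectral product), fused to regress population density.
    import torch
    import torch.nn as nn

    class FusionPopulationCNN(nn.Module):
        def __init__(self):
            super().__init__()
            def branch(in_channels):
                return nn.Sequential(
                    nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
            self.branch_a = branch(3)   # source A: RGB (assumed)
            self.branch_b = branch(7)   # source B: multispectral (assumed band count)
            # Concatenate per-source features and regress a single density value.
            self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, img_a, img_b):
            fa = self.branch_a(img_a).flatten(1)
            fb = self.branch_b(img_b).flatten(1)
            return self.head(torch.cat([fa, fb], dim=1))

    model = FusionPopulationCNN()
    density = model(torch.randn(4, 3, 64, 64), torch.randn(4, 7, 64, 64))  # shape (4, 1)
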
Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314241
R. Jackson, Ruchen Wen, T. Williams
Abstract: There is a significant body of research seeking to enable moral decision making and ensure moral conduct in robots. One aspect of moral conduct is rejecting immoral human commands. For social robots, which are expected to follow and maintain human moral and sociocultural norms, it is especially important not only to engage in moral decision making, but also to properly communicate moral reasoning. We thus argue that it is critical for robots to carefully phrase command rejections. Specifically, the degree of politeness-theoretic face threat in a command rejection should be proportional to the severity of the norm violation motivating that rejection. We present a human subjects experiment showing some of the consequences of miscalibrated responses, including perceptions of the robot as inappropriately polite, direct, or harsh, and reduced robot likeability. This experiment is intended to motivate and inform the design of algorithms that autonomously and tactfully tune the pragmatic aspects of command rejections.
Citations: 27
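
The central design claim above is a proportionality principle: the face threat of a rejection should scale with the severity of the violated norm. A toy sketch of what a severity-to-phrasing calibration could look like; the tiers, thresholds, and wordings are hypothetical and not drawn from the paper.

    # Illustrative mapping from an assumed norm-violation severity score in [0, 1]
    # to a rejection phrasing tier; tier boundaries and wordings are hypothetical.
    def rejection_for(severity: float) -> str:
        if severity < 0.25:
            return "Hmm, are you sure about that? It seems a bit off."   # low face threat
        if severity < 0.5:
            return "I'd rather not do that; it goes against a rule we follow."
        if severity < 0.75:
            return "I won't do that. It would be wrong."
        return "Absolutely not. That request is seriously harmful."      # high face threat

    print(rejection_for(0.9))
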
Algorithmic Stereotypes: Implications for Fairness of Generalizing from Past Data
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314312
D. McNamara
Abstract: Background: Algorithms are used to make or support decisions about people in a wide variety of contexts, including the provision of financial credit, judicial risk assessments, applicant screening for employment, and online ad selection. Such algorithms often make predictions about the future behavior of individuals by generalizing from data recording the past behaviors of other individuals. Concerns have arisen about the fairness of these algorithms. Researchers have responded by developing definitions of fairness and algorithm designs that incorporate these definitions [2]. A common theme is the avoidance of discrimination on the basis of group membership, such as race or gender. This may be more complex than simply excluding the explicit consideration of an individual's group membership, because other characteristics may be correlated with this group membership, a phenomenon known as redundant encoding [5]. Different definitions of fairness may be invoked by different stakeholders. The controversy associated with the COMPAS recidivism prediction system used in some parts of the United States showed this in practice. The news organization ProPublica critiqued the system as unfair because, among non-reoffenders, African-Americans were more likely to be marked high risk than whites, while among re-offenders, whites were more likely to be marked low risk than African-Americans [1]. COMPAS owner Equivant (formerly Northpointe) argued that the algorithm was not unfair because, among those marked high risk, African-Americans were no less likely to reoffend than whites [4].
Citations: 1
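
The ProPublica/Equivant disagreement summarized above reduces to two different group-wise statistics computed from the same predictions: error-rate balance (e.g., false positive rates) versus predictive parity (reoffense rates among those marked high risk). A small worked example with made-up counts, not the actual COMPAS data, shows how one criterion can hold while the other fails.

    # Made-up counts (not the actual COMPAS data). Each group's confusion counts:
    #   hi_re: marked high risk and reoffended     hi_no: marked high risk, did not reoffend
    #   lo_re: marked low risk and reoffended      lo_no: marked low risk, did not reoffend
    groups = {
        "group_a": dict(hi_re=60, hi_no=40, lo_re=20, lo_no=80),
        "group_b": dict(hi_re=30, hi_no=20, lo_re=30, lo_no=120),
    }

    for name, c in groups.items():
        ppv = c["hi_re"] / (c["hi_re"] + c["hi_no"])   # P(reoffend | marked high risk)
        fpr = c["hi_no"] / (c["hi_no"] + c["lo_no"])   # P(marked high risk | did not reoffend)
        print(f"{name}: P(reoffend | high risk) = {ppv:.2f}, false positive rate = {fpr:.2f}")

    # With these counts, both groups have the same 0.60 predictive value (Equivant's
    # criterion), yet group_a's false positive rate is 0.33 versus 0.14 for group_b
    # (ProPublica's criterion), so the two fairness definitions pull apart.
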
The Value of Trustworthy AI
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314228
D. Danks
Abstract: Trust is one of the most critical relations in our human lives, whether trust in one another, trust in the artifacts that we use every day, or trust of an AI system. Even a cursory examination of the literatures in human-computer interaction, human-robot interaction, and numerous other disciplines reveals a deep, persistent concern with the nature of trust in AI, and the conditions under which it can be generated, reduced, repaired, or influenced. At a high level, we often understand trust as a relation in which the trustor makes themselves vulnerable based on positive expectations about the behavior or intentions of the trustee [1]. For example, when I trust my car to start in the morning, I make myself vulnerable (e.g., I risk being late to work if it does not start) because I have the positive expectation that it actually will start. This high-level characterization is relatively unhelpful, however, particularly given the wide range of disciplines that have examined the relation of trust, ranging from organizational behavior to game theory to ethics to cognitive science. The picture that emerges from, for example, social psychology (i.e., two distinct kinds of trust depending on whether one knows the trustee's behaviors or intentions/values) appears to be quite different from the one that emerges from moral philosophy (i.e., a single, highly moralized notion), even though both are consistent with this high-level characterization. This talk first introduces that diversity of types of 'trust', but then argues that we can make progress towards a unified characterization by focusing on the function of trust. That is, we should ask why we care whether we can trust our artifacts, AI, or fellow humans, as that can help to illuminate features of trust that are shared across domains, trustors, and trustees. I contend that one reason to desire trust is an "almost-necessary" condition on ethical action: namely, that the user has a reasonable belief that the system (whether human or machine) will behave approximately as intended. This condition is obviously not sufficient for ethical use, nor is it strictly necessary, since the best available option might nonetheless be one for which the user lacks appropriate reasonable beliefs. Nonetheless, it provides a reasonable starting point for an analysis of 'trust'. More precisely, I propose that this condition indicates a role for trust as providing precisely those reasonable beliefs, at least when we have appropriately grounded trust. That is, we can understand 'appropriate trust' as obtaining when the trustor has justified beliefs that the trustee has suitable dispositions. As there is variation in the trustor's goals and values, and also in the openness of the context of use, different specific versions of 'appropriate trust' result, as those variations lead to different types of focal dispositions, specific dispositions, or observability of dispositions, respectively. For example, in an open context (i.e. ...
Citations: 16
Popularity Bias in Ranking and Recommendation
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314309
Himan Abdollahpouri
Abstract: Many recommender systems suffer from popularity bias: popular items are recommended frequently, while less popular, niche products are recommended rarely or not at all. Recommending the ignored products in the "long tail" is nonetheless critical for businesses, as these items are otherwise unlikely to be discovered. Popularity bias also raises a social-justice concern: the entities being ranked should have a fair chance of being served and represented. In this work, I investigate the problem of popularity bias in recommender systems and develop algorithms to address it.
Citations: 65
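
One simple way to quantify the popularity bias described above is to measure how many recommendation slots go to long-tail items. A minimal sketch, where the interaction counts, the 80% head/tail cutoff, and the recommendation lists are all illustrative assumptions:

    from collections import Counter

    # Hypothetical interaction log with a long-tail popularity distribution.
    interactions = (["i1"] * 500 + ["i2"] * 300 + ["i3"] * 100 + ["i4"] * 60
                    + ["i5"] * 25 + ["i6"] * 10 + ["i7"] * 5)
    popularity = Counter(interactions)

    # Call the most popular items covering 80% of interactions the "short head";
    # everything else is the long tail (the 80% cutoff is just a convention).
    total, covered, head = sum(popularity.values()), 0, set()
    for item, count in popularity.most_common():
        if covered >= 0.8 * total:
            break
        head.add(item)
        covered += count

    # Hypothetical top-3 recommendation lists produced for three users.
    rec_lists = [["i1", "i2", "i3"], ["i1", "i2", "i4"], ["i2", "i1", "i3"]]
    slots = [item for recs in rec_lists for item in recs]
    tail_share = sum(item not in head for item in slots) / len(slots)
    print(f"long-tail items in the catalog: {len(popularity) - len(head)} of {len(popularity)}")
    print(f"share of recommendation slots given to long-tail items: {tail_share:.2f}")

With these made-up numbers, the long tail holds most of the catalog but receives only about a third of the recommendation slots, which is the kind of disparity the paper sets out to correct.
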
A Framework for Benchmarking Discrimination-Aware Models in Machine Learning
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314262
Rodrigo L. Cardoso, Wagner Meira, Jr, Virgílio A. F. Almeida, Mohammed J. Zaki
Abstract: Discrimination-aware models in machine learning are a recent topic of study; motivated by ethical and legal concerns, they aim to minimize the adverse impact of machine learning decisions on certain groups of people. We propose a benchmark framework for assessing discrimination-aware models. Our framework consists of systematically generated biased datasets that are similar to real-world data, created by a Bayesian network approach. Experimental results show that we can assess the quality of techniques through known metrics of discrimination, and our flexible framework can be extended to most real datasets and fairness measures to support a diversity of assessments.
Citations: 14
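
The abstract above describes generating biased-but-realistic datasets from a Bayesian network. A minimal sketch of that idea with a tiny hand-specified network in which a protected attribute influences both a correlated feature and the label; the structure and probabilities are assumptions for illustration, not the paper's generator.

    import random

    random.seed(0)

    def sample_record():
        # Tiny hand-specified Bayesian network: protected attribute A influences a
        # correlated feature X and, together with X, the label Y.
        a = random.random() < 0.5                        # P(A=1) = 0.5
        x = random.random() < (0.7 if a else 0.4)        # X correlated with A ("redundant encoding")
        p_y = (0.6 if x else 0.2) + (0.1 if a else 0.0)  # Y depends on X plus a direct bias from A
        y = random.random() < p_y
        return int(a), int(x), int(y)

    data = [sample_record() for _ in range(10_000)]

    def positive_rate(group):
        labels = [y for a, _, y in data if a == group]
        return sum(labels) / len(labels)

    # The generated dataset exhibits a controlled disparity (statistical parity difference).
    print(f"P(Y=1 | A=1) = {positive_rate(1):.2f}")   # roughly 0.58 with these probabilities
    print(f"P(Y=1 | A=0) = {positive_rate(0):.2f}")   # roughly 0.36

Because the bias is injected through known conditional probabilities, a benchmark built this way knows the ground-truth level of discrimination, which is what lets it score how well discrimination-aware models remove it.
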
Towards Empathetic Planning and Plan Recognition
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314307
Maayan Shvo
Abstract: Every compassionate and functioning society requires its members to have the capacity to adopt others' perspectives. As Artificial Intelligence (AI) systems are given increasingly sensitive and impactful roles in society, it is important to enable AI to wield empathy as a tool to benefit those it interacts with. In this paper, we work towards this goal by bringing together a number of important concepts: empathy, AI planning, and plan recognition (i.e., the problem of inferring an actor's plan and goal given observations about its behavior). We formalize the notions of Empathetic Planning and Empathetic Plan Recognition, which are informed by the beliefs and affective state of the actor, and propose AI planning-based computational approaches. We illustrate the benefits of our approach by conducting a study with human participants.
Citations: 5
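
Plan recognition, as defined parenthetically in the abstract above, infers an actor's likely goal from observed behavior. A common cost-based baseline (in the spirit of Ramirez and Geffner, and not necessarily the approach taken in this paper) scores each candidate goal by how much the observations deviate from that goal's optimal plan; a toy sketch on a one-dimensional corridor:

    import math

    # Toy domain: an agent walks along a one-dimensional corridor starting at 0.
    # Candidate goals are positions; we observe a partial trajectory of positions.
    start = 0
    goals = {"door": 8, "window": -3, "desk": 2}
    observations = [0, 1, 2, 3]

    def cost(a, b):
        return abs(a - b)  # optimal travel cost between two positions

    def extra_cost(goal_pos):
        # Cost of reaching the goal while passing through the observed positions,
        # minus the cost of simply going straight to the goal from the start.
        along_obs = sum(cost(observations[i], observations[i + 1])
                        for i in range(len(observations) - 1))
        return along_obs + cost(observations[-1], goal_pos) - cost(start, goal_pos)

    # Goals whose optimal plans are consistent with the observations get higher probability.
    weights = {g: math.exp(-extra_cost(p)) for g, p in goals.items()}
    total = sum(weights.values())
    for g, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"P({g} | observations) = {w / total:.2f}")   # "door" comes out most likely
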
(When) Can AI Bots Lie?
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314281
T. Chakraborti, S. Kambhampati
Abstract: The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent; it can instead arise in cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e., algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion of the moral implications of such behavior from the perspective of the doctor-patient relationship.
Citations: 29
Specifying AI Objectives as a Human-AI Collaboration problem
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314227
A. Dragan
Abstract: Estimation, planning, control, and learning are giving us robots that can generate good behavior given a specified objective and set of constraints. What I care about is how humans enter this behavior-generation picture, and I study two complementary challenges: 1) how to optimize behavior when the robot is not acting in isolation, but needs to coordinate or collaborate with people; and 2) what to optimize in order to get the behavior we want. My work has traditionally focused on the former, but more recently I have been casting the latter as a human-robot collaboration problem as well (where the human is the end user, or even the robotics engineer building the system). Treating it as such has enabled us to use robot actions to gain information; to account for human pedagogic behavior; and to exchange information between the human and the robot via a plethora of communication channels, from external forces that the person physically applies to the robot, to comparison queries, to defining a proxy objective function.
Citations: 0
IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314283
Bishwamittra Ghosh, Kuldeep S. Meel
Abstract: The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users need to understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability. Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach called MLIC was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its runtime performance lags significantly, rendering the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances? In this paper, we take a step towards answering this question in the affirmative. We propose IMLI: an incremental MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude of runtime improvement without loss of accuracy or interpretability.
Citations: 28
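
IMLI reduces interpretable rule learning to MaxSAT and regains scalability by training over data partitions incrementally. The toy sketch below conveys only the flavor of such an encoding: it uses the PySAT RC2 MaxSAT solver to decide which candidate literals enter a single conjunctive rule, trading off classification errors against rule size. The encoding, weights, and dataset are illustrative simplifications, not IMLI's actual formulation or its incremental partition scheme.

    # Requires the python-sat package (pip install python-sat).
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    X = [[1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]]   # tiny binary dataset, 3 features
    y = [1, 1, 0, 0]
    m = len(X[0])

    def s(j): return j + 1          # s_j: literal "feature j is 1" is included in the rule
    def e(i): return m + i + 1      # e_i: example i is misclassified

    wcnf = WCNF()
    for i, (xi, yi) in enumerate(zip(X, y)):
        zeros = [j for j in range(m) if xi[j] == 0]
        if yi == 1:
            # A positive example fails the conjunction iff some selected literal is 0 on it.
            for j in zeros:
                wcnf.append([-s(j), e(i)])
        else:
            # A negative example is correctly rejected only if some selected literal is 0 on it.
            wcnf.append([e(i)] + [s(j) for j in zeros])
        wcnf.append([-e(i)], weight=10)   # soft: avoid misclassifications
    for j in range(m):
        wcnf.append([-s(j)], weight=1)    # soft: prefer smaller (more interpretable) rules

    model = RC2(wcnf).compute()
    chosen = [f"x{j}" for j in range(m) if s(j) in model]
    print("learned rule: predict 1 iff", " AND ".join(chosen))   # expected: x0

The incremental idea in the paper then amounts to solving a sequence of such queries, one data partition at a time, rather than one monolithic query over the full dataset.
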