Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Learning Context-Sensitive Norms under Uncertainty
Vasanth Sarathy
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314315
Abstract: Norms and conventions play a central role in maintaining social order in multi-agent societies [2, 5]. I study the problem of how these norms and conventions can be learned from observation of heterogeneous sources under conditions of uncertainty. This is necessary because it is not enough to simply hard-code a set of norms into a new agent before it enters society: norms can evolve over time as agents enter and leave the society [9].
Citations: 0
Inferring Work Task Automatability from AI Expert Evidence
Paul Duckworth, L. Graham, Michael A. Osborne
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314247
Abstract: Despite growing alarm about machine learning technologies automating jobs, there is little good evidence on which activities can be automated using such technologies. We contribute the first dataset of its kind by surveying over 150 top academics and industry experts in machine learning, robotics and AI, receiving over 4,500 ratings of how automatable specific tasks are today. We present a probabilistic machine learning model to learn the patterns connecting expert estimates of task automatability with the skills, knowledge and abilities required to perform those tasks. Our model infers the automatability of over 2,000 work activities, and we show how automatability differs across types of activities and types of occupations. Sensitivity analysis identifies the specific skills, knowledge and abilities that drive higher or lower automatability. We provide quantitative evidence of what is perceived to be automatable using the state of the art in machine learning technology. We consider the societal impacts of these results and of task-level approaches.
Citations: 10
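The model described above connects expert automatability ratings to the skills, knowledge and abilities required for each task and then extrapolates to unrated activities. As a rough, hedged sketch of that kind of probabilistic mapping (the paper's actual model, features and data are not reproduced here; every name and number below is invented), one could fit a Gaussian process regressor to a few feature vectors and predict a rating with uncertainty for a new task:

```python
# Illustrative sketch only: a probabilistic regressor from task features
# (skills/knowledge/abilities) to expert automatability ratings.
# Feature names and values are hypothetical, not taken from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Rows: tasks rated by experts; columns: e.g. manual dexterity,
# social perceptiveness, originality (hypothetical 0-1 scores).
X_rated = np.array([
    [0.9, 0.1, 0.2],   # routine manual task
    [0.2, 0.8, 0.7],   # care/negotiation task
    [0.5, 0.3, 0.4],
])
y_rated = np.array([0.85, 0.15, 0.55])  # mean expert rating in [0, 1]

# GP with an RBF kernel plus a noise term to absorb disagreement among experts.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_rated, y_rated)

# Predict automatability (with uncertainty) for an unrated activity.
x_new = np.array([[0.7, 0.2, 0.3]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted automatability: {mean[0]:.2f} +/- {std[0]:.2f}")
```

A sensitivity analysis in this style would perturb one feature column at a time and track how the predicted ratings move.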
Perceptions of Domestic Robots' Normative Behavior Across Cultures
Huao Li, Stephanie Milani, Vigneshram Krishnamoorthy, M. Lewis, K. Sycara
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314251
Abstract: As domestic service robots become more common and widespread, they must be programmed to accomplish tasks efficiently while aligning their actions with relevant norms. The first step toward equipping domestic robots with normative reasoning competence is understanding the norms that people apply to the behavior of robots in specific social contexts. To that end, we conducted an online survey of Chinese and United States participants in which we asked them to select the preferred normative action a domestic service robot should take in a number of scenarios. The paper makes multiple contributions: our extensive survey is the first to (a) collect data on people's attitudes toward the normative behavior of domestic robots, (b) do so across cultures, and (c) study relative priorities among norms for this domain. We present our findings and discuss their implications for building computational models of robot normative reasoning.
Citations: 16
"Scary Robots": Examining Public Responses to AI “可怕的机器人”:调查公众对人工智能的反应
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society Pub Date : 2019-01-27 DOI: 10.1145/3306618.3314232
S. Cave, Katelyn Coughlan, Kanta Dihal
{"title":"\"Scary Robots\": Examining Public Responses to AI","authors":"S. Cave, Katelyn Coughlan, Kanta Dihal","doi":"10.1145/3306618.3314232","DOIUrl":"https://doi.org/10.1145/3306618.3314232","url":null,"abstract":"How AI is perceived by the public can have significant impact on how it is developed, deployed and regulated. Some commentators argue that perceptions are currently distorted or extreme. This paper discusses the results of a nationally representative survey of the UK population on their perceptions of AI. The survey solicited responses to eight common narratives about AI (four optimistic, four pessimistic), plus views on what AI is, how likely it is to impact in respondents' lifetimes, and whether they can influence it. 42% of respondents offered a plausible definition of AI, while 25% thought it meant robots. Of the narratives presented, those associated with automation were best known, followed by the idea that AI would become more powerful than humans. Overall results showed that the most common visions of the impact of AI elicit significant anxiety. Only two of the eight narratives elicited more excitement than concern (AI making life easier, and extending life). Respondents felt they had no control over AI's development, citing the power of corporations or government, or versions of technological determinism. Negotiating the deployment of AI will require contending with these anxieties.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132378024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 94
Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots
A. Addison, C. Bartneck, K. Yogeeswaran
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314272
Abstract: Previous studies using the 'shooter bias' paradigm showed that people demonstrate a similar racial bias toward dark-colored robots over light-colored robots (i.e., Black vs. White) as they do toward humans of similar skin tones [3]. However, such an effect could be argued to be the result of social priming. It also raises the question of how people might respond to robots that are in the middle of the color spectrum (i.e., brown) and whether such effects are moderated by the perceived anthropomorphism of the robots. We conducted two experiments to examine, first, whether the shooter bias shown toward robots is driven by social priming and, second, whether diversifying robot color and level of anthropomorphism influences shooter bias. Our results showed that shooter bias was not influenced by social priming and, interestingly, that introducing a new color of robot removed shooter bias tendencies entirely. However, varying the anthropomorphism of the robots did not moderate the level of shooter bias, and, contrary to our expectations, the robots were not perceived by participants as having different levels of anthropomorphism.
Citations: 17
Shared Moral Foundations of Embodied Artificial Intelligence
Joëlle M. Cruz
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314280
Abstract: Sophisticated AIs will make decisions about how to respond to complex situations, and we may wonder whether those decisions will align with the moral values of human beings. I argue that pessimistic worries about this value alignment problem are overstated. In order to achieve intelligence in its full generality and adaptiveness, cognition in AIs will need to be embodied in the sense of the Embodied Cognition research program. That embodiment will yield AIs that share our moral foundations, namely coordination, sociality, and acknowledgement of shared resources. Consequently, we can expect a broad moral alignment between human beings and AIs. AIs will likely show no more variation in their values than we find among human beings.
Citations: 4
Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products
Inioluwa Deborah Raji, Joy Buolamwini
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314244
Abstract: Although algorithmic auditing has emerged as a key strategy for exposing systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits, as scholarship on how algorithmic audits increase algorithmic fairness and transparency in commercial systems is nascent. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender and skin-type performance disparities in commercial facial analysis models. This paper (1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, (2) presents new performance metrics from the targeted companies IBM, Microsoft and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, (3) provides performance results on PPB for the non-target companies Amazon and Kairos, and (4) explores differences in company responses, as shared through corporate communications, that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new API versions. All targets reduced accuracy disparities between males and females and between darker- and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup, which underwent a 17.7% - 30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72% to 8.3% reduction in overall error on PPB for the target corporations' APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with overall error rates of 8.66% and 6.60%, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively.
Citations: 370
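The figures quoted above are subgroup error rates and their change between audit rounds. The sketch below shows only that bookkeeping: per-subgroup error rates from (subgroup, prediction, label) records and the percentage-point reduction between two audits. The toy records are hypothetical and are not the study's data.

```python
# Minimal sketch: compute per-subgroup error rates from audit predictions
# and the percentage-point change between two audit rounds.
# The example arrays are hypothetical; they are not the study's data.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, pred, true in records:
        totals[group] += 1
        errors[group] += int(pred != true)
    return {g: errors[g] / totals[g] for g in totals}

def disparity_change(before, after):
    """Percentage-point reduction in error per subgroup between audits."""
    return {g: (before[g] - after[g]) * 100 for g in before if g in after}

# Hypothetical toy audit of a gender classifier on two subgroups.
audit_round_1 = [("darker_female", "male", "female")] * 30 + \
                [("darker_female", "female", "female")] * 70 + \
                [("lighter_male", "male", "male")] * 99 + \
                [("lighter_male", "female", "male")] * 1
audit_round_2 = [("darker_female", "male", "female")] * 5 + \
                [("darker_female", "female", "female")] * 95 + \
                [("lighter_male", "male", "male")] * 100

before = subgroup_error_rates(audit_round_1)
after = subgroup_error_rates(audit_round_2)
print({g: round(v, 1) for g, v in disparity_change(before, after).items()})
# {'darker_female': 25.0, 'lighter_male': 1.0}
```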
Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements
Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi
Pub Date: 2019-01-14 | DOI: 10.1145/3306618.3314234
Abstract: As more researchers have become aware of and passionate about algorithmic fairness, there has been an explosion of papers laying out new metrics, suggesting algorithms to address issues, and calling attention to problems in existing applications of machine learning. This research has greatly expanded our understanding of the concerns and challenges in deploying machine learning, but there has been much less work on how the rubber meets the road. In this paper we provide a case study on applying fairness research to a production classification system, and offer new insights into how to measure and address algorithmic fairness issues. We discuss open questions in implementing equality of opportunity and describe our fairness metric, conditional equality, which takes into account distributional differences. Further, we provide a new approach to improve on the fairness metric during model training and demonstrate its efficacy in improving performance for a real-world product.
Citations: 119
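Equality of opportunity compares error rates such as false-positive rates across groups; the conditional equality metric named in the abstract additionally accounts for distributional differences between groups. The sketch below is one plausible, hedged reading of that idea rather than the paper's exact definition: it reports the cross-group false-positive-rate gap both overall and as a size-weighted average within strata of a conditioning variable. All column names and data are illustrative.

```python
# Hedged sketch of group fairness gaps: an equality-of-opportunity style
# false-positive-rate (FPR) gap, computed overall and within strata of a
# conditioning variable (one plausible reading of "conditional equality").
# Column names and data are illustrative, not taken from the paper.
import pandas as pd

def fpr(frame):
    neg = frame[frame["label"] == 0]
    return (neg["pred"] == 1).mean() if len(neg) else float("nan")

def fpr_gap(frame, group_col="group"):
    rates = [fpr(g) for _, g in frame.groupby(group_col)]
    return max(rates) - min(rates)

def conditional_fpr_gap(frame, condition_col, group_col="group"):
    # Size-weighted average of the per-stratum cross-group gap.
    total = len(frame)
    return sum(len(s) / total * fpr_gap(s, group_col)
               for _, s in frame.groupby(condition_col))

df = pd.DataFrame({
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label":  [0,   0,   0,   1,   0,   0,   0,   1],
    "pred":   [1,   1,   0,   1,   1,   0,   0,   1],
    "bucket": ["low", "low", "high", "high", "low", "high", "high", "low"],
})
print(round(fpr_gap(df), 3), round(conditional_fpr_gap(df, "bucket"), 3))
# 0.333 0.0
```

In this toy data the overall gap is about 0.33 but vanishes once the comparison is conditioned on the stratum, which is the kind of distributional effect such a metric is meant to surface. The paper's training-time improvement method is not reproduced here.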
Mapping Informal Settlements in Developing Countries using Machine Learning and Low Resolution Multi-spectral Data
Bradley Gram-Hansen, P. Helber, I. Varatharajan, F. Azam, Alejandro Coca-Castro, V. Kopačková, P. Bilinski
Pub Date: 2019-01-03 | DOI: 10.1145/3306618.3314253
Abstract: Informal settlements are home to the most socially and economically vulnerable people on the planet. In order to deliver effective economic and social aid, non-governmental organizations (NGOs), such as the United Nations Children's Fund (UNICEF), require detailed maps of the locations of informal settlements. However, data regarding informal and formal settlements is largely unavailable, and where available it is often incomplete. This is due, in part, to the cost and complexity of gathering data on a large scale. To address these challenges, this work provides three contributions: (1) a brand-new machine learning dataset purpose-built for informal settlement detection; (2) a demonstration that informal settlements can be detected using freely available low-resolution (LR) data, in contrast to previous studies that use very-high-resolution (VHR) satellite and aerial imagery, which is cost-prohibitive for NGOs; and (3) two effective classification schemes on our curated dataset, one that is cost-efficient for NGOs and another that is cost-prohibitive for NGOs but has additional utility. We integrate these schemes into a semi-automated pipeline that converts either a LR or a VHR satellite image into a binary map encoding the locations of informal settlements.
Citations: 40
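The pipeline described above converts a low-resolution multi-spectral image into a binary map of informal settlements; at its core that is per-pixel classification on spectral bands. The sketch below shows only that core step on a synthetic 10-band tile with a generic random-forest classifier; the band count, labels and classifier choice are assumptions for illustration, not the paper's pipeline or data.

```python
# Illustrative per-pixel classification of a multi-spectral image into a
# binary settlement map. The image, band count and labels are synthetic;
# this is not the paper's pipeline or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W, BANDS = 64, 64, 10            # e.g. one low-resolution multi-spectral tile
rng = np.random.default_rng(0)
image = rng.random((H, W, BANDS))   # stand-in for calibrated reflectances
labels = image[..., 3] > 0.5        # stand-in ground-truth mask (informal = True)

# Flatten to (pixels, bands) and train on a labelled subset of pixels.
X = image.reshape(-1, BANDS)
y = labels.reshape(-1)
train_idx = rng.choice(X.shape[0], size=1000, replace=False)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train_idx], y[train_idx])

# Predict every pixel and reshape back into a binary map.
binary_map = clf.predict(X).reshape(H, W)
print(binary_map.shape, binary_map.mean())  # fraction of pixels flagged as informal
```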
Ethically Aligned Opportunistic Scheduling for Productive Laziness
Han Yu, C. Miao, Yongqing Zheng, Li-zhen Cui, Simon Fauvel, Cyril Leung
Pub Date: 2019-01-02 | DOI: 10.1145/3306618.3314240
Abstract: In artificial intelligence (AI) mediated workforce management systems (e.g., crowdsourcing), long-term success depends on workers accomplishing tasks productively and resting well. This dual objective can be summarized by the concept of productive laziness. Existing scheduling approaches mostly focus on efficiency but overlook worker wellbeing through proper rest. In order to enable workforce management systems to follow the IEEE Ethically Aligned Design guidelines and prioritize worker wellbeing, we propose a distributed Computational Productive Laziness (CPL) approach. It recommends personalized work-rest schedules based on local data about a worker's capabilities and situational factors, incorporating opportunistic resting to achieve superlinear collective productivity without the need for explicit coordination messages. Extensive experiments based on a real-world dataset of over 5,000 workers demonstrate that CPL enables workers to spend 70% of the effort to complete 90% of the tasks on average, providing more ethically aligned scheduling than existing approaches.
Citations: 12
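The CPL approach described above trades task throughput against rest using only a worker's local information. The toy rule below is not CPL; it is a hedged illustration of opportunistic resting in which fatigue grows while working, decays while resting, and a worker only takes a task when fatigue is below a personal threshold. The fatigue model and all numbers are invented.

```python
# Toy illustration of opportunistic work-rest scheduling (not the paper's
# CPL algorithm). Fatigue grows while working, decays while resting, and a
# worker only takes a task when fatigue is below a personal threshold.
from dataclasses import dataclass

@dataclass
class Worker:
    capability: float        # tasks completed per working step
    fatigue_limit: float     # personal threshold for taking a rest
    fatigue: float = 0.0
    completed: float = 0.0
    worked_steps: int = 0

    def step(self, tasks_pending: bool) -> None:
        if tasks_pending and self.fatigue < self.fatigue_limit:
            self.completed += self.capability            # work this step
            self.fatigue += 1.0
            self.worked_steps += 1
        else:
            self.fatigue = max(0.0, self.fatigue - 2.0)  # rest opportunistically

workers = [Worker(capability=1.0, fatigue_limit=4.0),
           Worker(capability=1.5, fatigue_limit=3.0)]
tasks_remaining = 60.0
for _ in range(40):                                      # 40 scheduling steps
    for w in workers:
        before = w.completed
        w.step(tasks_pending=tasks_remaining > 0)
        tasks_remaining -= (w.completed - before)

total_done = sum(w.completed for w in workers)
total_effort = sum(w.worked_steps for w in workers)
print(f"completed {total_done:.0f} tasks in {total_effort} working steps")
```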