Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Does AI Qualify for the Job?: A Bidirectional Model Mapping Labour and AI Intensities
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-23. DOI: 10.1145/3375627.3375831
Fernando Martínez-Plumed, Songül Tolan, Annarosa Pesole, J. Hernández-Orallo, Enrique Fernández-Macías, Emilia Gómez
Abstract: In this paper we present a setting for examining the relation between the distribution of research intensity in AI research and its relevance for a range of work tasks (and occupations) in current and simulated scenarios. We perform a mapping between labour and AI using a set of cognitive abilities as an intermediate layer. This setting favours a two-way interpretation to analyse (1) what impact current or simulated AI research activity has or would have on labour-related tasks and occupations, and (2) what areas of AI research activity would be responsible for a desired or undesired effect on specific labour tasks and occupations. Concretely, in our analysis we map 59 generic labour-related tasks from several worker surveys and databases to 14 cognitive abilities from the cognitive science literature, and these to a comprehensive list of 328 AI benchmarks used to evaluate progress in AI techniques. We provide this model and its implementation as a tool for simulations. We also show the effectiveness of our setting with some illustrative examples.
Citations: 13
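The bidirectional mapping this abstract describes can be illustrated with two relevance matrices composed through the intermediate ability layer. Below is a minimal sketch using the paper's dimensions (59 tasks, 14 abilities, 328 benchmarks); the random weights, function names, and normalisation scheme are illustrative assumptions, not the authors' released tool.

```python
import numpy as np

# Toy dimensions mirroring the paper's setup (59 tasks, 14 abilities,
# 328 benchmarks); values here are random placeholders, not real data.
n_tasks, n_abilities, n_benchmarks = 59, 14, 328
rng = np.random.default_rng(0)

# Relevance of each cognitive ability to each labour task (rows sum to 1).
task_ability = rng.random((n_tasks, n_abilities))
task_ability /= task_ability.sum(axis=1, keepdims=True)

# Relevance of each AI benchmark to each cognitive ability (rows sum to 1).
ability_benchmark = rng.random((n_abilities, n_benchmarks))
ability_benchmark /= ability_benchmark.sum(axis=1, keepdims=True)

def tasks_from_research(research_intensity: np.ndarray) -> np.ndarray:
    """Direction 1: AI research activity -> pressure on labour tasks."""
    ability_intensity = ability_benchmark @ research_intensity   # shape (14,)
    return task_ability @ ability_intensity                      # shape (59,)

def research_from_tasks(task_effect: np.ndarray) -> np.ndarray:
    """Direction 2: desired task-level effect -> responsible benchmark areas."""
    ability_weight = task_ability.T @ task_effect                # shape (14,)
    return ability_benchmark.T @ ability_weight                  # shape (328,)

research = rng.random(n_benchmarks)          # simulated research intensity
print(tasks_from_research(research)[:5])     # impact scores, first 5 tasks
print(research_from_tasks(np.ones(n_tasks))[:5])
```

Composing the same pair of matrices in either direction is what makes the model "bidirectional": one mapping answers question (1) above, its transpose answers question (2).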
Activism by the AI Community: Analysing Recent Achievements and Future Prospects
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-17. DOI: 10.1145/3375627.3375814
Haydn Belfield
Abstract: The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks, drawing on the literatures on epistemic communities and on worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture and on high bargaining power due to the high demand for a limited supply of AI 'talent'. Both are crucial to the future of AI activism and worthy of sustained attention.
Citations: 32
Monitoring Misuse for Accountable 'Artificial Intelligence as a Service'
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-14. DOI: 10.1145/3375627.3375873
S. A. Javadi, Richard Cloete, Jennifer Cobbe, M. S. Lee, Jatinder Singh
Abstract: AI is increasingly being offered 'as a service' (AIaaS). This entails service providers offering customers access to pre-built AI models and services for tasks such as object recognition, text translation, text-to-voice conversion, and facial recognition, to name a few. The offerings enable customers to easily integrate a range of powerful AI-driven capabilities into their applications. Customers access these models through the provider's APIs, sending particular data to which the models are applied, the results of which are returned. However, there are many situations in which the use of AI can be problematic. AIaaS services typically represent generic functionality, available 'at a click'. Providers may therefore, for reasons of reputation or responsibility, seek to ensure that the AIaaS services they offer are being used by customers for 'appropriate' purposes. This paper introduces and explores the concept whereby AIaaS providers uncover situations of possible service misuse by their customers. Illustrated through topical examples, we consider the technical usage patterns that could signal situations warranting scrutiny, and raise some of the legal and technical challenges of monitoring for misuse. In all, by introducing this concept, we indicate a potential area for further inquiry from a range of perspectives.
Citations: 16
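One concrete reading of the "technical usage patterns" the abstract alludes to is volume anomalies on sensitive endpoints. The sketch below flags a customer whose daily call count to, say, a facial-recognition API jumps far above their trailing average; the window, threshold, and class design are hypothetical illustrations, not a mechanism proposed in the paper.

```python
import collections

class MisuseMonitor:
    """Flag customers whose call volume to a sensitive endpoint spikes
    well above their own recent baseline (illustrative heuristic only)."""

    def __init__(self, window: int = 7, spike_factor: float = 5.0):
        self.window = window                         # days of history to average
        self.spike_factor = spike_factor             # how large a jump to flag
        self.history = collections.defaultdict(list) # customer -> daily counts

    def record_day(self, customer: str, calls: int) -> bool:
        """Record one day's call count; return True if it warrants scrutiny."""
        past = self.history[customer][-self.window:]
        self.history[customer].append(calls)
        if len(past) < self.window:
            return False  # not enough history yet to judge
        baseline = sum(past) / len(past)
        return calls > self.spike_factor * max(baseline, 1.0)

monitor = MisuseMonitor()
for day, calls in enumerate([100, 110, 95, 105, 98, 102, 100, 2500]):
    if monitor.record_day("customer-42", calls):
        print(f"day {day}: flag for review ({calls} calls)")
```

A real deployment would combine many such signals (input content, geographic patterns, model pairings), which is precisely where the legal and technical challenges the paper raises come in.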
Social and Governance Implications of Improved Data Efficiency
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-14. DOI: 10.1145/3375627.3375863
Aaron David Tucker, Markus Anderljung, A. Dafoe
Abstract: Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency -- as more actors gain access to any level of capability -- the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the "AI production function", will be key to understanding the development of the AI industry and its societal impacts.
Citations: 12
Robot Rights?: Let's Talk about Human Welfare Instead
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-14. DOI: 10.1145/3375627.3375855
A. Birhane, J. V. Dijk
Abstract: The 'robot rights' debate, and its related question of 'robot responsibility', invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots 'rights', but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the 'robot rights' debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labour exploitation, and erosion of privacy, all impacting society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of responsibility taken by the people designing, selling and deploying such machines, remain the most pressing ethical discussion in AI.
Citations: 61
Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-13. DOI: 10.1145/3375627.3375870
Lok Chan, Kenzie Doyle, Duncan C. McElfresh, Vincent Conitzer, John P. Dickerson, Jana Schaich Borg, Walter Sinnott-Armstrong
Abstract: Given AI's growing role in modeling and improving decision-making, how and when to present users with feedback is an urgent topic to address. We empirically examined the effect of feedback from false AI on moral decision-making about donor kidney allocation. We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback about participants' own decision-making perceived to be given by AI, even if the feedback is entirely random. We also discovered different effects between assessments presented as being from human experts and assessments presented as being from AI.
Citations: 4
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-13. DOI: 10.1145/3375627.3375803
Carina E. A. Prunkl, Jess Whittlestone
Abstract: One way of carving up the broad 'AI ethics and society' research space that has emerged in recent years is to distinguish between 'near-term' and 'long-term' research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.
Citations: 21
Should Artificial Intelligence Governance be Centralised?: Design Lessons from History
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-10. DOI: 10.1145/3375627.3375857
P. Cihon, M. Maas, Luke Kemp
Abstract: Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty of securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs, and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution: a well-designed centralised regime covering a set of coherent issues could be beneficial, but locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.
Citations: 28
Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-09. DOI: 10.1145/3375627.3375875
Chris Dulhanty, A. Wong
Abstract: Modern face recognition systems leverage datasets containing images of hundreds of thousands of specific individuals' faces to train deep convolutional neural networks to learn an embedding space that maps an arbitrary individual's face to a vector representation of their identity. The performance of a face recognition system in face verification (1:1) and face identification (1:N) tasks is directly related to the ability of an embedding space to discriminate between identities. Recently, there has been significant public scrutiny into the source and privacy implications of large-scale face recognition training datasets such as MS-Celeb-1M and MegaFace, as many people are uncomfortable with their face being used to train dual-use technologies that can enable mass surveillance. However, the impact of an individual's inclusion in training data on a derived system's ability to recognize them has not previously been studied. In this work, we audit ArcFace, a state-of-the-art, open-source face recognition system, in a large-scale face identification experiment with more than one million distractor images. We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present. This modest difference in accuracy demonstrates that face recognition systems using deep learning work better for individuals they are trained on, which has serious privacy implications when one considers that all major open-source face recognition training datasets do not obtain informed consent from individuals during their collection.
Citations: 7
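Rank-1 face identification, the metric reported in this abstract, asks whether the single nearest gallery embedding to a probe image carries the correct identity. A minimal sketch, assuming pre-computed L2-normalised embeddings (random stand-ins here, not ArcFace outputs) and a toy-scale gallery in place of the paper's million-plus distractors:

```python
import numpy as np

def rank1_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids):
    """Fraction of probes whose nearest gallery embedding (cosine
    similarity, valid for unit vectors) carries the correct identity.
    Distractors are gallery entries matching no probe identity."""
    sims = probe_emb @ gallery_emb.T        # (n_probes, n_gallery) similarities
    nearest = sims.argmax(axis=1)           # index of the top-1 match per probe
    return float(np.mean(gallery_ids[nearest] == probe_ids))

def unit(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
gallery = unit(rng.normal(size=(1000, 512)))  # 1,000 gallery faces (toy scale)
gallery_ids = np.arange(1000)                 # ids 100..999 act as distractors
probes = unit(gallery[:100] + 0.1 * rng.normal(size=(100, 512)))  # noisy views
print(rank1_accuracy(probes, np.arange(100), gallery, gallery_ids))
```

The paper's 79.71% vs. 75.73% gap corresponds to running this evaluation twice: once with probes of identities seen in training, once with unseen identities.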
Algorithmic Fairness from a Non-ideal Perspective
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Pub Date: 2020-01-08. DOI: 10.1145/3375627.3375828
S. Fazelpour, Zachary Chase Lipton
Abstract: Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In the hopes of mitigating these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might hope to observe in a fair world, offering a variety of algorithms that attempt to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and ideal worlds. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of their actions, naive applications of ideal thinking can lead to misguided policies. In this paper, we demonstrate a connection between the recent literature on fair machine learning and the ideal approach in political philosophy, and show that some recently uncovered shortcomings in proposed algorithms reflect broader troubles faced by the ideal approach. We work this analysis through for different formulations of fairness and conclude with a critical discussion of real-world impacts and directions for new research.
Citations: 68
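For readers unfamiliar with the parity metrics this abstract refers to, demographic parity is one of the simplest: it compares positive-decision rates across groups. A minimal sketch with synthetic data (the function name and the injected bias are illustrative, not drawn from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.
    Zero means the classifier satisfies demographic parity exactly."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)                 # binary protected attribute
y_pred = (rng.random(1000) < 0.3 + 0.1 * group).astype(int)  # biased decisions
print(demographic_parity_gap(y_pred, group))          # gap of roughly 0.10
```

The paper's non-ideal critique applies one level up: knowing this gap is 0.10 says nothing about how the disparity arose or who bears responsibility for closing it.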