Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

A Framework for Technically- and Morally-Sound AI
Duncan C. McElfresh
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314320
Abstract: Artificial Intelligence (AI) ethics is by no means a new discipline; thinkers like Asimov and Philip K. Dick laid the foundations of this field decades ago. Both then and now, popular dilemmas in AI ethics largely focus on artificial consciousness, artificial general intelligence, autonomous weapons, and some version of the trolley problem. While these thought experiments may prove useful in the future, modern AI applications in use today raise ethical dilemmas that require urgent resolution. Public outcry in response to AI in health care, criminal justice, and employment highlights the urgency of the matter. These real and imminent ethical challenges posed by AI form the basis of my dissertation research. In particular, I focus on domains where AI is necessary or inevitable, such as kidney exchange and medical image classification, and where ethical challenges are unavoidable.
Citations: 0

Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes
Tracy Hresko Pearl
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314249
Abstract: Over the last five years, a small but growing number of accidents involving fully or partially autonomous vehicles have raised a profoundly novel legal issue: who should be liable (if anyone), and how victims should be compensated (if at all), when a vehicle controlled by an algorithm rather than a human driver causes injury. The answer to this question has implications far beyond the resolution of individual autonomous vehicle crash cases. Whether the American legal system is capable of handling these cases fairly and efficiently affects (a) whether consumers will adopt autonomous vehicles, and (b) the rate at which they will (or will not) do so. These implications should concern law and policy makers immensely. If autonomous cars stand to drastically reduce the number of fatalities and injuries on U.S. roadways (and virtually every scholar believes that they will), getting the adjudication and compensation aspect of autonomous vehicle injuries "wrong," so to speak, risks stymieing adoption of this technology and leaving more Americans at risk of dying at the hands of human drivers.
Citations: 6

On Serving Two Masters: Directing Critical Technical Practice towards Human-Compatibility in AI
McKane Andrus
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314325
Abstract: In this project I have worked towards a method for critical, socially aligned research in Artificial Intelligence by merging the analysis of conceptual commitments in technical work, discourse analysis, and critical technical practice. While the goal of critical technical practice as proposed by [1] is to overcome technical impasses, I explore an alternative use case: ensuring that technical research is aligned with social values. In the design of AI systems, we generally start with a technical formulation of a problem and then attempt to build a system that addresses that problem. Critical technical practice tells us that this technical formulation is always founded upon the discipline's core discourse and ontology, and that difficulty in solving a technical problem might just result from inconsistencies and faults in those core attributes. What I hope to show with this project is that, even when a technical problem seems solvable, critical technical practice can and should be used to ensure the human-compatibility of the technical research.
Citations: 0

Generating Appropriate Responses to Inappropriate Robot Commands
R. Jackson
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314306
Abstract: This paper describes early work at the intersection of robot ethics and natural language generation investigating two overarching questions: (1) how might current language generation algorithms generate utterances with unintended implications or otherwise accidentally alter the ecosystem of human norms, and (2) how can we design future language systems such that they purposefully influence the human normative ecosystem as productively as possible.
Citations: 0

Speaking on Behalf of: Representation, Delegation, and Authority in Computational Text Analysis
E. Baumer, M. McGee
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314292
Abstract: Computational tools can often facilitate human work by rapidly summarizing large amounts of data, especially text. Doing so delegates to such models some measure of authority to speak on behalf of those people whose data are being analyzed. This paper considers the consequences of such delegation. It draws on sociological accounts of representation and translation to examine one particular case: the application of topic modeling to blogs written by parents of children on the autism spectrum. In doing so, the paper illustrates the kinds of statements that topic models, and other computational techniques, can make on behalf of people. It also articulates some of the potential consequences of such statements. The paper concludes by offering several suggestions about how to address potential harms that can occur when computational models speak on behalf of someone.
Citations: 5

Costs and Benefits of Fair Representation Learning
D. McNamara, Cheng Soon Ong, R. C. Williamson
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3317964
Abstract: Machine learning algorithms are increasingly used to make or support important decisions about people's lives. This has led to interest in the problem of fair classification, which involves learning to make decisions that are non-discriminatory with respect to a sensitive variable such as race or gender. Several methods have been proposed to solve this problem, including fair representation learning, which cleans the input data used by the algorithm to remove information about the sensitive variable. We show that using fair representation learning as an intermediate step in fair classification incurs a cost compared to directly solving the problem, which we refer to as the cost of mistrust. We show that fair representation learning in fact addresses a different problem, which is of interest when the data user is not trusted to access the sensitive variable. We quantify the benefits of fair representation learning, by showing that any subsequent use of the cleaned data will not be too unfair. The benefits we identify result from restricting the decisions of adversarial data users, while the costs are due to applying those same restrictions to other data users.
Citations: 46

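The abstract above describes "cleaning" input data so that it carries no information about a sensitive variable. A minimal sketch of that idea is linear residualization: regress each feature on the sensitive variable and keep only the residuals, which are uncorrelated with it. This is an illustrative assumption on our part, not the authors' method; real fair-representation approaches also remove nonlinear dependence.

```python
import numpy as np

def clean_representation(X, s):
    """Remove the linearly predictable component of X given sensitive variable s.

    Illustrative sketch only: the residuals are uncorrelated with s, but
    nonlinear information about s may remain.
    """
    S = np.column_stack([np.ones(len(s)), s])      # design matrix with intercept
    coef, *_ = np.linalg.lstsq(S, X, rcond=None)   # regress each feature on s
    return X - S @ coef                            # residuals: zero correlation with s

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500).astype(float)     # binary sensitive variable
X = rng.normal(size=(500, 3)) + 2.0 * s[:, None]   # features correlated with s
Z = clean_representation(X, s)

# Correlation between s and each cleaned feature is ~0 (up to floating point)
corr = np.corrcoef(np.column_stack([s[:, None], Z]), rowvar=False)[0, 1:]
print(np.abs(corr).max())
```

Any downstream classifier trained on `Z` then cannot exploit linear dependence on `s`, which is the sense in which the cleaned data bounds subsequent unfairness, at the cost of also restricting benign data users.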
Toward a Design and Evaluation Framework for Interpretable Machine Learning Systems
Sina Mohseni
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314322
Abstract: The need for interpretable and accountable intelligent systems becomes more pressing as artificial intelligence plays a larger role in human life. Explainable artificial intelligence systems can be a solution, explaining the reasoning behind the decisions and predictions of the intelligent system. My research develops design and evaluation methods for interpretable machine learning systems, leveraging knowledge and experience in the fields of machine learning, human-computer interaction, and data visualization. My research objectives are to present a design and evaluation framework for explainable artificial intelligence systems, propose new methods and metrics to better evaluate the benefits of transparent machine learning systems, and apply interpretability methods for model reliability verification.
Citations: 6

Regulating Lethal and Harmful Autonomy: Drafting a Protocol VI of the Convention on Certain Conventional Weapons
Sean Welsh
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314295
Abstract: This short paper provides two partial drafts for a Protocol VI that might be added to the existing five Protocols of the Convention on Certain Conventional Weapons (CCW) to regulate "lethal autonomous weapons systems" (LAWS). Draft A sets the line of tolerance at a "human in the loop" between the critical functions of select and engage. Draft B sets the line of tolerance at a human in the "wider loop" that includes the critical function of defining target classes as well as select and engage. Draft A represents an interpretation of what NGOs such as the Campaign to Stop Killer Robots are seeking to get enacted. Draft B is a more cautious draft based on the Dutch concept of "meaningful human control in the wider loop" that does not seek to ban any system that currently exists. Such a draft may be more likely to achieve the consensus required by the UN CCW process. A list of weapons banned by both drafts is provided along with the rationale for each draft. The drafts are intended to stimulate debate on the precise form a binding instrument on LAWS would take and on what LAWS (if any) should be banned and why.
Citations: 1

Enabling Effective Transparency: Towards User-Centric Intelligent Systems
Aaron Springer
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314317
Abstract: Much of the current research in transparency and explainability is highly technical and focuses on how to derive explanations from models and algorithms. Less thought is being given to how users actually want to receive transparency and explanations from intelligent systems. My work tackles transparency and explainability from a user-centric perspective. I examine why transparency is desirable by showing that users may be susceptible to deception from intelligent systems. I demonstrate when users want transparency. Finally, my work begins to uncover how users want transparency conveyed. This body of work intends to create a path for designing transparency that puts user needs first rather than creating transparency as a convenient afterthought of model selection.
Citations: 4

Fairness Criteria for Face Recognition Applications
F. Michalsky
Pub Date: 2019-01-27 | DOI: 10.1145/3306618.3314308
Abstract: Nowadays, machine learning algorithms play an important role in our daily lives, and it is important to ensure their fairness and transparency. A number of methodologies for evaluating machine learning fairness have been introduced in the literature. In this research we propose a systematic confidence evaluation approach to measure fairness discrepancies of our deep learning architecture for image recognition using the UTKFace database.
Citations: 2