Explanations for AI: Computable or Not?

Niko Tsakalakis, L. Carmichael, Sophie Stalla-Bourdillon, L. Moreau, D. Huynh, Ayah Helal
DOI: 10.1145/3394332.3402900
Published in: Companion Publication of the 12th ACM Conference on Web Science
Publication date: 2020-07-06
Citations: 1

Abstract

Automated decision-making continues to be used for a variety of purposes across a multitude of sectors. Ultimately, what makes a ‘good’ explanation is a focus not only for the designers and developers of AI systems, but for many disciplines, including law, philosophy, psychology, history, sociology and human-computer interaction. Given that the generation of compliant, valid and effective explanations for AI requires a high level of critical, interdisciplinary thinking and collaboration, this area is of particular interest to Web Science. The workshop ‘Explanations for AI: Computable or Not?’ (exAI’20) aims to bring together researchers, practitioners and representatives of those subjected to socially-sensitive decision-making to exchange ideas, methods and challenges as part of an interdisciplinary discussion on explanations for AI. It is hoped that this workshop will build a cross-sectoral, multi-disciplinary and international network of people focusing on explanations for AI, and an agenda to drive this work forward.

exAI’20 will hold two position-paper sessions, in which the panel members and workshop attendees will debate key issues in an interactive dialogue. The sessions are intended to stimulate a lively debate on whether explanations for AI are computable or not by providing time for an interactive discussion after each paper. The discussion will uncover key arguments for and against the computability of explanations for AI related to socially-sensitive decision-making. An introductory keynote from the team behind the PLEAD project (Provenance-Driven & Legally Grounded Explanations for Automated Decisions) will present use cases, scenarios and practical experience of explanations for AI. The keynote will serve as a starting point for the discussions during the paper sessions about the rationale, technologies and/or organisational measures used, and accounts from different perspectives, e.g. software designers, implementers and those subject to automated decision-making.

By the end of this workshop, attendees will have gained a good insight into the critiques and advantages of explanations for AI, including the extent to which explanations can or should be made computable. They will have the opportunity to participate in and inform discussions on complex topics in AI explainability, such as the legal requirements for explanations, the extent to which data ethics may drive explanations for AI, reflections on the similarities and differences between explanations for AI decisions and manual decisions, as well as what makes a ‘good’ explanation and the etymology of explanations for socially-sensitive decisions. exAI’20 is supported by the Engineering and Physical Sciences Research Council [grant number EP/S027238/1]. We would like to thank the organizers of the Web Science 2019 conference for agreeing to host our workshop and for their support.