Interaction design for explainable AI: workshop proposal

Prashan Madumal, Ronal Singh, Joshua Newn, F. Vetere
{"title":"Interaction design for explainable AI: workshop proposal","authors":"Prashan Madumal, Ronal Singh, Joshua Newn, F. Vetere","doi":"10.1145/3292147.3293450","DOIUrl":null,"url":null,"abstract":"As artificial intelligence (AI) systems become increasingly complex and ubiquitous, these systems will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified due to ethical concerns as well as trust, but achieving this has become difficult due to the 'black-box' nature many AI models have adopted. Explainable AI (XAI) can potentially address this problem by explaining its actions, decisions and behaviours of the system to users. However, much research in XAI is done in a vacuum using only the researchers' intuition of what constitutes a 'good' explanation while ignoring the interaction and the human aspect. This workshop invites researchers in the HCI community and related fields to have a discourse about human-centred approaches to XAI rooted in interaction and to shed light and spark discussion on interaction design challenges in XAI.","PeriodicalId":309502,"journal":{"name":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th Australian Conference on Computer-Human Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3292147.3293450","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

As artificial intelligence (AI) systems become increasingly complex and ubiquitous, they will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified, both for ethical reasons and to maintain trust, but this has become difficult due to the 'black-box' nature of many AI models. Explainable AI (XAI) can potentially address this problem by explaining the system's actions, decisions and behaviours to users. However, much XAI research is done in a vacuum, relying only on researchers' intuitions about what constitutes a 'good' explanation while ignoring interaction and the human aspect. This workshop invites researchers from the HCI community and related fields to a discourse on human-centred approaches to XAI rooted in interaction, and to shed light on and spark discussion about interaction design challenges in XAI.