VizXP: A Visualization Framework for Conveying Explanations to Users in Model Reconciliation Problems

Ashwin Kumar, S. Vasileiou, Melanie Bancilhon, Alvitta Ottley, W. Yeoh
{"title":"VizXP: A Visualization Framework for Conveying Explanations to Users in Model Reconciliation Problems","authors":"Ashwin Kumar, S. Vasileiou, Melanie Bancilhon, Alvitta Ottley, W. Yeoh","doi":"10.1609/icaps.v32i1.19860","DOIUrl":null,"url":null,"abstract":"Advancements in explanation generation for automated planning algorithms have moved us a step closer towards realizing the full potential of human-AI collaboration in real-world planning applications. Within this context, a framework called model reconciliation has gained a lot of traction, mostly due to its deep connection with a popular theory in human psychology, known as the theory of mind. Existing literature in this setting, however, has mostly been constrained to algorithmic contributions for generating explanations. To the best of our knowledge, there has been very little work on how to effectively convey such explanations to human users, a critical component in human-AI collaboration systems. In this paper, we set out to explore to what extent visualizations are an effective candidate for conveying explanations in a way that can be easily understood. Particularly, by drawing inspiration from work done in visualization systems for classical planning, we propose a visualization framework for visualizing explanations generated from model reconciliation algorithms. We demonstrate the efficacy of our proposed system in a comprehensive user study, where we compare our framework against a text-based baseline for two types of explanations – domain-based and problem-based explanations. Results from the user study show that users, on average, understood explanations better when they are conveyed via our visualization system compared to when they are conveyed via a text-based baseline.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"62 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/icaps.v32i1.19860","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Advancements in explanation generation for automated planning algorithms have moved us a step closer towards realizing the full potential of human-AI collaboration in real-world planning applications. Within this context, a framework called model reconciliation has gained significant traction, largely due to its deep connection with a popular theory in human psychology known as the theory of mind. Existing literature in this setting, however, has mostly been constrained to algorithmic contributions for generating explanations. To the best of our knowledge, there has been very little work on how to effectively convey such explanations to human users, a critical component of human-AI collaboration systems. In this paper, we explore the extent to which visualizations are an effective medium for conveying explanations in a way that can be easily understood. In particular, drawing inspiration from work on visualization systems for classical planning, we propose a framework for visualizing explanations generated by model reconciliation algorithms. We demonstrate the efficacy of our proposed system in a comprehensive user study, where we compare our framework against a text-based baseline for two types of explanations: domain-based and problem-based explanations. Results from the user study show that users, on average, understood explanations better when they were conveyed via our visualization system than when they were conveyed via the text-based baseline.
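For context, the explanations VizXP visualizes come from the model reconciliation setting, in which the planner holds a model that may differ from the human's model, and an explanation is a set of model updates that makes the planner's plan make sense in the updated human model. The sketch below is our own illustration of that idea under standard assumptions from this literature (models abstracted as sets of features, a brute-force search for a smallest explanation, and a hypothetical is_optimal oracle); it is not the paper's implementation.

```python
# Minimal sketch of the model reconciliation idea underlying the explanations
# that VizXP visualizes. The brute-force search and the is_optimal oracle are
# illustrative assumptions, not the paper's algorithm.
from itertools import combinations

def minimal_explanation(robot_model, human_model, plan, is_optimal):
    """Find a smallest set of model updates that, applied to the human's
    model, makes the robot's plan optimal in it.

    robot_model, human_model: sets of model features (e.g. precondition,
    effect, or initial-state facts, encoded as strings).
    is_optimal(model, plan): oracle deciding whether plan is optimal in model.
    """
    # Candidate updates are the features the two models disagree on.
    diff = (robot_model - human_model) | (human_model - robot_model)
    for k in range(len(diff) + 1):
        for updates in combinations(sorted(diff), k):
            updated = set(human_model)
            for u in updates:
                if u in robot_model:
                    updated.add(u)      # add a feature the human is missing
                else:
                    updated.discard(u)  # remove a feature the robot lacks
            if is_optimal(updated, plan):
                return set(updates)     # smallest explanation found
    return None
```

In this framing, the paper's domain-based explanations would correspond to updates over action definitions, and its problem-based explanations to updates over the problem specification (initial state or goal); this mapping is our reading of the two explanation types named in the abstract.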