Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems

Devleena Das, Been Kim, S. Chernova
{"title":"基于子目标的不可靠智能决策支持系统解释","authors":"Devleena Das, Been Kim, S. Chernova","doi":"10.1145/3581641.3584055","DOIUrl":null,"url":null,"abstract":"Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce suboptimal output or fail to work altogether. The field of explainable AI (XAI) has sought to develop techniques that improve the interpretability of black-box systems. While most XAI work has focused on single-classification tasks, the subfield of explainable AI planning (XAIP) has sought to develop techniques that make sequential decision making AI systems explainable to domain experts. Critically, prior work in applying XAIP techniques to IDS systems has assumed that the plan being proposed by the planner is always optimal, and therefore the action or plan being recommended as decision support to the user is always optimal. In this work, we examine novice user interactions with a non-robust IDS system – one that occasionally recommends suboptimal actions, and one that may become unavailable after users have become accustomed to its guidance. We introduce a new explanation type, subgoal-based explanations, for plan-based IDS systems, that supplements traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance in the presence of IDS recommendations, improve user ability to distinguish optimal and suboptimal IDS recommendations, and are preferred by users. 
Additionally, we demonstrate that subgoal-based explanations enable more robust user performance in the case of IDS failure, showing the significant benefit of training users for an underlying task with subgoal-based explanations.","PeriodicalId":118159,"journal":{"name":"Proceedings of the 28th International Conference on Intelligent User Interfaces","volume":"88 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems\",\"authors\":\"Devleena Das, Been Kim, S. Chernova\",\"doi\":\"10.1145/3581641.3584055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce suboptimal output or fail to work altogether. The field of explainable AI (XAI) has sought to develop techniques that improve the interpretability of black-box systems. While most XAI work has focused on single-classification tasks, the subfield of explainable AI planning (XAIP) has sought to develop techniques that make sequential decision making AI systems explainable to domain experts. Critically, prior work in applying XAIP techniques to IDS systems has assumed that the plan being proposed by the planner is always optimal, and therefore the action or plan being recommended as decision support to the user is always optimal. In this work, we examine novice user interactions with a non-robust IDS system – one that occasionally recommends suboptimal actions, and one that may become unavailable after users have become accustomed to its guidance. 
We introduce a new explanation type, subgoal-based explanations, for plan-based IDS systems, that supplements traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance in the presence of IDS recommendations, improve user ability to distinguish optimal and suboptimal IDS recommendations, and are preferred by users. Additionally, we demonstrate that subgoal-based explanations enable more robust user performance in the case of IDS failure, showing the significant benefit of training users for an underlying task with subgoal-based explanations.\",\"PeriodicalId\":118159,\"journal\":{\"name\":\"Proceedings of the 28th International Conference on Intelligent User Interfaces\",\"volume\":\"88 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 28th International Conference on Intelligent User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3581641.3584055\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 28th International Conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581641.3584055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce suboptimal output or fail to work altogether. The field of explainable AI (XAI) has sought to develop techniques that improve the interpretability of black-box systems. While most XAI work has focused on single-classification tasks, the subfield of explainable AI planning (XAIP) has sought to develop techniques that make sequential decision making AI systems explainable to domain experts. Critically, prior work in applying XAIP techniques to IDS systems has assumed that the plan being proposed by the planner is always optimal, and therefore the action or plan being recommended as decision support to the user is always optimal. In this work, we examine novice user interactions with a non-robust IDS system – one that occasionally recommends suboptimal actions, and one that may become unavailable after users have become accustomed to its guidance. We introduce a new explanation type, subgoal-based explanations, for plan-based IDS systems, that supplements traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance in the presence of IDS recommendations, improve user ability to distinguish optimal and suboptimal IDS recommendations, and are preferred by users. Additionally, we demonstrate that subgoal-based explanations enable more robust user performance in the case of IDS failure, showing the significant benefit of training users for an underlying task with subgoal-based explanations.
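The core mechanism described above — supplementing a traditional IDS recommendation with the subgoal toward which the recommended action contributes — can be illustrated with a minimal sketch. All names and example values here are hypothetical illustrations, not taken from the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An IDS recommendation: the suggested action plus the subgoal it serves."""
    action: str
    subgoal: str  # the subgoal the action contributes toward

def explain(rec: Recommendation) -> str:
    # Traditional IDS output would present only rec.action; a subgoal-based
    # explanation additionally surfaces the subgoal behind it, which is what
    # lets users judge whether the recommendation actually fits their goal.
    return f"Recommended action: {rec.action} (contributes to subgoal: {rec.subgoal})"

rec = Recommendation(action="add flour", subgoal="prepare the dough")
print(explain(rec))
```

Exposing the subgoal gives users a check the bare action lacks: if a recommended action does not plausibly serve its stated subgoal, that is a cue the recommendation may be suboptimal, consistent with the improved user discrimination the paper reports.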