The value of real-time automated explanations in stochastic planning

IF 5.1 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Claudia V. Goldman, Ronit Bustin, Wenyuan Qi, Zhengyu Xing, Rachel McPhearson-White, Sally Rogers
{"title":"实时自动解释在随机规划中的价值","authors":"Claudia V. Goldman ,&nbsp;Ronit Bustin ,&nbsp;Wenyuan Qi ,&nbsp;Zhengyu Xing ,&nbsp;Rachel McPhearson-White ,&nbsp;Sally Rogers","doi":"10.1016/j.artint.2025.104323","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, we are witnessing an increase in computation power and memory, leading to strong AI algorithms becoming applicable in areas affecting our daily lives. We focus on AI planning solutions for complex, real-life decision-making problems under uncertainty, such as autonomous driving. Human trust in such AI-based systems is essential for their acceptance and market penetration. Moreover, users need to establish appropriate levels of trust to benefit the most from these systems. Previous studies have motivated this work, showing that users can benefit from receiving (handcrafted) information about the reasoning of a stochastic AI planner, for example, controlling automated driving maneuvers. Our solution to automating these hand-crafted notifications with explainable AI algorithms, XAI, includes studying: (1) what explanations can be generated from an AI planning system, applied to a real-world problem, in real-time? What is that content that can be processed from a planner's reasoning that can help users understand and trust the system controlling a behavior they are experiencing? (2) when can this information be displayed? and (3) how shall we display this information to an end user? The value of these computed XAI notifications has been assessed through an online user study with 800 participants, experiencing simulated automated driving scenarios. Our results show that real time XAI notifications decrease significantly subjective misunderstanding of participants compared to those that received only a dynamic HMI display. Also, our XAI solution significantly increases the level of understanding of participants with prior ADAS experience and of participants that lack such experience but have non-negative prior trust to ADAS features. The level of trust significantly increases when XAI was provided to a more restricted set of the participants, including those over 60 years old, with prior ADAS experience and non-negative prior trust attitude to automated features.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"343 ","pages":"Article 104323"},"PeriodicalIF":5.1000,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The value of real-time automated explanations in stochastic planning\",\"authors\":\"Claudia V. Goldman ,&nbsp;Ronit Bustin ,&nbsp;Wenyuan Qi ,&nbsp;Zhengyu Xing ,&nbsp;Rachel McPhearson-White ,&nbsp;Sally Rogers\",\"doi\":\"10.1016/j.artint.2025.104323\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recently, we are witnessing an increase in computation power and memory, leading to strong AI algorithms becoming applicable in areas affecting our daily lives. We focus on AI planning solutions for complex, real-life decision-making problems under uncertainty, such as autonomous driving. Human trust in such AI-based systems is essential for their acceptance and market penetration. Moreover, users need to establish appropriate levels of trust to benefit the most from these systems. 
Previous studies have motivated this work, showing that users can benefit from receiving (handcrafted) information about the reasoning of a stochastic AI planner, for example, controlling automated driving maneuvers. Our solution to automating these hand-crafted notifications with explainable AI algorithms, XAI, includes studying: (1) what explanations can be generated from an AI planning system, applied to a real-world problem, in real-time? What is that content that can be processed from a planner's reasoning that can help users understand and trust the system controlling a behavior they are experiencing? (2) when can this information be displayed? and (3) how shall we display this information to an end user? The value of these computed XAI notifications has been assessed through an online user study with 800 participants, experiencing simulated automated driving scenarios. Our results show that real time XAI notifications decrease significantly subjective misunderstanding of participants compared to those that received only a dynamic HMI display. Also, our XAI solution significantly increases the level of understanding of participants with prior ADAS experience and of participants that lack such experience but have non-negative prior trust to ADAS features. The level of trust significantly increases when XAI was provided to a more restricted set of the participants, including those over 60 years old, with prior ADAS experience and non-negative prior trust attitude to automated features.</div></div>\",\"PeriodicalId\":8434,\"journal\":{\"name\":\"Artificial Intelligence\",\"volume\":\"343 \",\"pages\":\"Article 104323\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-03-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0004370225000426\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0004370225000426","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recently, we have witnessed increases in computational power and memory that are making strong AI algorithms applicable in areas affecting our daily lives. We focus on AI planning solutions for complex, real-life decision-making problems under uncertainty, such as autonomous driving. Human trust in such AI-based systems is essential for their acceptance and market penetration. Moreover, users need to establish appropriate levels of trust to benefit the most from these systems. Previous studies motivated this work by showing that users can benefit from receiving (handcrafted) information about the reasoning of a stochastic AI planner, for example one controlling automated driving maneuvers. Our solution for automating these handcrafted notifications with explainable AI (XAI) algorithms studies three questions: (1) What explanations can be generated from an AI planning system, applied to a real-world problem, in real time? That is, what content can be extracted from a planner's reasoning to help users understand and trust the system controlling a behavior they are experiencing? (2) When should this information be displayed? (3) How should this information be displayed to an end user? The value of the computed XAI notifications was assessed through an online user study in which 800 participants experienced simulated automated driving scenarios. Our results show that real-time XAI notifications significantly decrease participants' subjective misunderstanding compared to receiving only a dynamic HMI display. Our XAI solution also significantly increases the level of understanding of participants with prior ADAS experience, and of participants who lack such experience but hold non-negative prior trust in ADAS features. The level of trust increases significantly when XAI is provided to a more restricted set of participants: those over 60 years old with prior ADAS experience and a non-negative prior trust attitude toward automated features.
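The abstract describes the what/when/how pipeline only in prose, so the following is a minimal Python sketch of one plausible reading of it: deriving a short notification from a stochastic planner's action values and deciding when to show it. The planner stub, state fields, action names, value numbers, message templates, and the `NotificationGenerator` class are all hypothetical illustrations, not the authors' implementation.

```python
# A minimal, hypothetical sketch of turning a stochastic planner's internals
# into real-time user notifications. Everything here (state fields, actions,
# Q-values, templates) is an illustrative assumption, not the paper's system.

from dataclasses import dataclass


@dataclass
class DrivingState:
    ego_speed: float      # m/s
    lead_gap: float       # distance to lead vehicle, m
    left_lane_free: bool


def plan(state: DrivingState) -> dict[str, float]:
    """Stand-in for a stochastic planner: returns expected value per action."""
    return {
        "keep_lane": state.lead_gap / 10.0,
        "change_left": 5.0 if state.left_lane_free else -10.0,
        "brake": 3.0 - state.ego_speed / 10.0,
    }


class NotificationGenerator:
    """WHAT: contrast the chosen action with the runner-up via the value gap.
    WHEN: notify only when the chosen action changes, to avoid spamming.
    HOW: short natural-language template suitable for an HMI display."""

    def __init__(self, min_gap: float = 1.0):
        self.last_action: str | None = None
        self.min_gap = min_gap  # suppress detailed explanations for near-ties

    def maybe_explain(self, q: dict[str, float]) -> str | None:
        ranked = sorted(q.items(), key=lambda kv: kv[1], reverse=True)
        (best, best_v), (second, second_v) = ranked[0], ranked[1]
        if best == self.last_action:
            return None  # nothing new to explain this cycle
        self.last_action = best
        if best_v - second_v < self.min_gap:
            return f"Choosing {best}; alternatives are nearly as good."
        return (f"Choosing {best} because its expected outcome "
                f"({best_v:.1f}) beats {second} ({second_v:.1f}).")


if __name__ == "__main__":
    gen = NotificationGenerator()
    for state in [
        DrivingState(ego_speed=25.0, lead_gap=60.0, left_lane_free=False),
        DrivingState(ego_speed=25.0, lead_gap=20.0, left_lane_free=True),
    ]:
        msg = gen.maybe_explain(plan(state))
        if msg:
            print(msg)
```

Calling `maybe_explain` once per planning cycle yields a message only when the selected maneuver changes, which mirrors the abstract's concern with when information should be displayed, not just what it contains.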
Source journal
Artificial Intelligence
Category: Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 11.20
Self-citation rate: 1.40%
Annual articles: 118
Review time: 8 months
Journal description: The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, provided they focus on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.