Human-centric explanations for users in automated vehicles: A systematic review

IF 5.7 · JCR Q1 (Ergonomics) · CAS Zone 1 (Engineering & Technology)
Zishuo Zhu, Xiaomeng Li, Patricia Delhomme, Ronald Schroeter, Sebastien Glaser, Andry Rakotonirainy
Accident Analysis & Prevention, Volume 220, Article 108152 (July 2025). DOI: 10.1016/j.aap.2025.108152
Citations: 0

Abstract

Background

The decision-making processes of automated vehicles (AVs) can confuse users and reduce trust, highlighting the need for clear, human-centric explanations. Such explanations can help users understand AV actions, facilitate smooth control transitions, and enhance transparency, acceptance, and trust. Critically, they could improve situational awareness and support timely, appropriate human responses, thereby reducing the risk of misuse, unexpected automated decisions, and delayed reactions in safety-critical scenarios. However, the current literature offers limited insight into how different types of explanations affect drivers across scenarios, or into methods for evaluating explanation quality. This paper systematically reviews what, when, and how to provide human-centric explanations in AV contexts.

Methods

The systematic review followed PRISMA guidelines and covered five databases (Scopus, Web of Science, IEEE Xplore, TRID, and Semantic Scholar) from 2000 to April 2024. Of the 266 articles identified, 59 met the inclusion criteria.

Results

Providing detailed explanations in real time as the AV carries out its driving actions does not always increase user trust and acceptance. Explanations that clarify the reasoning behind actions are more effective than those that merely describe actions. Providing explanations before an action is recommended, though the optimal timing remains uncertain. Multimodal explanations (visual and audio) are most effective when each mode conveys unique information; otherwise, visual-only explanations are preferred. The narrative perspective (first-person vs. third-person) also affects user trust differently across scenarios.
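The findings above span four design dimensions of an AV explanation: its content (describing vs. justifying an action), its timing relative to the action, its modality, and its narrative perspective. The following sketch is purely illustrative (the class, field names, and heuristic are our own, not part of the reviewed studies) and encodes those dimensions as a simple data model:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class AVExplanation:
    """Hypothetical schema for one explanation message; all names are illustrative."""
    content: Literal["what", "why", "what+why"]   # review: "why" outperforms mere "what"
    timing: Literal["before", "during", "after"]  # review: "before" recommended; lead time open
    modality: Literal["visual", "audio", "visual+audio"]
    perspective: Literal["first-person", "third-person"]

    def follows_review_heuristics(self) -> bool:
        # Rough heuristic reflecting two of the review's findings
        # (reason-giving content, delivered before the action); not a validated rule.
        return "why" in self.content and self.timing == "before"

msg = AVExplanation(content="why", timing="before",
                    modality="visual+audio", perspective="first-person")
print(msg.follows_review_heuristics())  # True
```

Such a schema could serve as a starting point for parameterising experimental conditions in future studies; the modality and perspective fields are recorded but deliberately excluded from the heuristic, since the review finds their effects to be scenario-dependent.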

Conclusions

The review underscores the importance of tailoring human-centric explanations to specific driving contexts. Future research should address explanation length, timing, and modality coordination, and focus on real-world studies to enhance generalisability. These insights are vital for advancing research on human-centric explanations in AV systems and fostering safer, more trustworthy human-vehicle interactions, ultimately reducing the risk of inappropriate reactions, delayed responses, or user error in traffic settings.
Source journal metrics: CiteScore 11.90 · Self-citation rate 16.90% · Annual articles 264 · Review time 48 days
Journal description: Accident Analysis & Prevention provides wide coverage of the general areas relating to accidental injury and damage, including the pre-injury and immediate post-injury phases. Published papers deal with medical, legal, economic, educational, behavioral, theoretical or empirical aspects of transportation accidents, as well as with accidents at other sites. Selected topics within the scope of the Journal may include: studies of human, environmental and vehicular factors influencing the occurrence, type and severity of accidents and injury; the design, implementation and evaluation of countermeasures; biomechanics of impact and human tolerance limits to injury; modelling and statistical analysis of accident data; policy, planning and decision-making in safety.