Daniel Omeiza, Raunak Bhattacharyya, Marina Jirotka, Nick Hawes, Lars Kunze
A transparency paradox? Investigating the impact of explanation specificity and autonomous vehicle imperfect detection capabilities on passengers

Transportation Research Part F: Traffic Psychology and Behaviour, Volume 109, Pages 1275-1292. Published: 2025-02-01. DOI: 10.1016/j.trf.2025.01.015. Available at: https://www.sciencedirect.com/science/article/pii/S1369847825000142
Citations: 0
Abstract
Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to adverse outcomes (such as anxiety) that could outweigh its benefits? It is unclear how the specificity of explanations (i.e., the level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model by adding a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to study the effect of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle perception system makes erroneous predictions; and (2) the relationship between passengers' behavioural cues and their feelings during the autonomous drives. Our findings showed that abstract explanations did not make passengers feel safer, despite being vague enough to conceal all perception system detection errors, compared with specific explanations that exposed only a minimal number of detection errors. Anxiety levels increased when specific explanations revealed perception system detection errors (high transparency). We found no significant link between passengers' visual patterns and their anxiety levels. We advocate for explanation systems in autonomous vehicles (AVs) that can adapt to different stakeholders' transparency needs.
About the journal:
Transportation Research Part F: Traffic Psychology and Behaviour focuses on the behavioural and psychological aspects of traffic and transport. The aim of the journal is to enhance theory development, improve the quality of empirical studies, and stimulate the application of research findings in practice. TRF provides a focus and a means of communication for the considerable research activity now being carried out in this field. The journal is a forum for transportation researchers, psychologists, ergonomists, engineers, and policy-makers with an interest in traffic and transport psychology.