Meaningful human control of partially automated driving systems: Insights from interviews with Tesla users

IF 3.5 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (Psychology, Applied)
Lucas Elbert Suryana, Sina Nordhoff, Simeon Calvert, Arkady Zgonnikov, Bart van Arem
{"title":"Meaningful human control of partially automated driving systems: Insights from interviews with Tesla users","authors":"Lucas Elbert Suryana ,&nbsp;Sina Nordhoff ,&nbsp;Simeon Calvert ,&nbsp;Arkady Zgonnikov ,&nbsp;Bart van Arem","doi":"10.1016/j.trf.2025.04.026","DOIUrl":null,"url":null,"abstract":"<div><div>Partially automated driving systems are designed to perform specific driving tasks—such as steering, accelerating, and braking—while still requiring human drivers to monitor the environment and intervene when necessary. This shift of driving responsibilities from human drivers to automated systems raises concerns about accountability, particularly in scenarios involving unexpected events. To address these concerns, the concept of meaningful human control (MHC) has been proposed. MHC emphasises the importance of humans retaining oversight and responsibility for decisions made by automated systems. Despite extensive theoretical discussion of MHC in driving automation, there is limited empirical research on how real-world partially automated systems align with MHC principles. This study offers two main contributions: (1) an empirical evaluation of MHC in partially automated driving, based on 103 semi-structured interviews with users of Tesla's Autopilot and Full Self-Driving (FSD) Beta systems; and (2) a methodological framework for assessing MHC through qualitative interview data. We operationalise the previously proposed tracking and tracing conditions of MHC using a set of evaluation criteria to determine whether these systems support meaningful human control in practice. Our findings indicate that several factors influence the degree to which MHC is achieved. Failures in tracking—where drivers' expectations regarding system safety are not adequately met—arise from technological limitations, susceptibility to environmental conditions (e.g., adverse weather or inadequate infrastructure), and discrepancies between technical performance and user satisfaction. Tracing performance—the ability to clearly assign responsibility—is affected by inconsistent adherence to safety protocols, varying levels of driver confidence, and the specific driving mode in use (e.g., Autopilot versus FSD Beta). These findings contribute to ongoing efforts to design partially automated driving systems that more effectively support meaningful human control and promote more appropriate use of automation.</div></div>","PeriodicalId":48355,"journal":{"name":"Transportation Research Part F-Traffic Psychology and Behaviour","volume":"113 ","pages":"Pages 213-236"},"PeriodicalIF":3.5000,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Research Part F-Traffic Psychology and Behaviour","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1369847825001524","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Partially automated driving systems are designed to perform specific driving tasks—such as steering, accelerating, and braking—while still requiring human drivers to monitor the environment and intervene when necessary. This shift of driving responsibilities from human drivers to automated systems raises concerns about accountability, particularly in scenarios involving unexpected events. To address these concerns, the concept of meaningful human control (MHC) has been proposed. MHC emphasises the importance of humans retaining oversight and responsibility for decisions made by automated systems. Despite extensive theoretical discussion of MHC in driving automation, there is limited empirical research on how real-world partially automated systems align with MHC principles. This study offers two main contributions: (1) an empirical evaluation of MHC in partially automated driving, based on 103 semi-structured interviews with users of Tesla's Autopilot and Full Self-Driving (FSD) Beta systems; and (2) a methodological framework for assessing MHC through qualitative interview data. We operationalise the previously proposed tracking and tracing conditions of MHC using a set of evaluation criteria to determine whether these systems support meaningful human control in practice. Our findings indicate that several factors influence the degree to which MHC is achieved. Failures in tracking—where drivers' expectations regarding system safety are not adequately met—arise from technological limitations, susceptibility to environmental conditions (e.g., adverse weather or inadequate infrastructure), and discrepancies between technical performance and user satisfaction. Tracing performance—the ability to clearly assign responsibility—is affected by inconsistent adherence to safety protocols, varying levels of driver confidence, and the specific driving mode in use (e.g., Autopilot versus FSD Beta). These findings contribute to ongoing efforts to design partially automated driving systems that more effectively support meaningful human control and promote more appropriate use of automation.
Source journal
CiteScore: 7.60
Self-citation rate: 14.60%
Articles per year: 239
Review time: 71 days
About the journal: Transportation Research Part F: Traffic Psychology and Behaviour focuses on the behavioural and psychological aspects of traffic and transport. The aim of the journal is to enhance theory development, improve the quality of empirical studies, and stimulate the application of research findings in practice. TRF provides a focus and a means of communication for the considerable volume of research activity now being carried out in this field. The journal provides a forum for transportation researchers, psychologists, ergonomists, engineers and policy-makers with an interest in traffic and transport psychology.