The Impact of Cybersecurity Attacks on Human Trust in Autonomous Vehicle Operations
Cherin Lim, David Prendez, Linda Ng Boyle, Prashanth Rajivan
Human Factors: The Journal of the Human Factors and Ergonomics Society
Published: 2024-09-19 | DOI: 10.1177/00187208241283321
Abstract
Objective: This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior.

Background: Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent threat of cyberattacks, which pose a formidable threat to the secure operation of AVs and, consequently, to human trust.

Method: A driving simulator experiment was conducted with 40 participants who were randomly assigned to one of two groups: (1) Experience and Feedback and (2) Experience-Only. All participants experienced three drives: Baseline, Attack, and Post-Attack. The Attack drive prevented participants from properly operating the vehicle in multiple instances. Only the Experience and Feedback group received a security update in the Post-Attack drive describing the mitigation of the vehicle's vulnerability. Trust and foot positions were recorded for each drive.

Results: Findings suggest that attacks on AVs significantly degrade human trust, and that trust remains degraded even after a subsequent error-free drive. Providing an update about the mitigation of the vulnerability did not significantly affect trust repair.

Conclusion: Trust toward AVs should be analyzed as an emergent and dynamic construct, requiring autonomous systems capable of calibrating trust after malicious attacks through appropriate experience and interaction design.

Application: The results of this study can be applied when building driver- and situation-adaptive AI systems within AVs.