Security analysis and adaptive false data injection against multi-sensor fusion localization for autonomous driving

Linqing Hu, Junqi Zhang, Jie Zhang, Shaoyin Cheng, Yuyi Wang, Weiming Zhang, Nenghai Yu

Information Fusion, Volume 117, Article 102822 (published 2024-11-26). DOI: 10.1016/j.inffus.2024.102822
Abstract
Multi-sensor Fusion (MSF) algorithms are critical components in modern autonomous driving systems, particularly in the localization and AI-powered perception modules that play a vital role in ensuring vehicle safety. The Error-State Kalman Filter (ESKF), employed specifically for localization fusion, is widely recognized for its robustness and accuracy in MSF implementations. While existing studies have demonstrated the vulnerability of the ESKF to sensor spoofing attacks, they have largely treated the filter as a black box, which limits the depth of their security analysis. In particular, lacking theoretical guidance, previous methods have relied on exponential functions to fit attack sequences across all scenarios. As a result, an attacker must explore an extensive parameter space to identify effective attack sequences and cannot adaptively generate optimal ones. This paper fills this gap by conducting a thorough security analysis of the ESKF model and presenting a simple approach for modeling injection errors in these systems. Building on this error model, we introduce a new attack strategy that uses constrained optimization to reduce the injection energy needed to reach a given deviation target, making the attack both efficient and effective. The lower injection energy also increases the attack's stealthiness, making it harder to detect. Unlike previous methods, our approach dynamically produces near-optimal injection signals without repeated trials to find the best parameter combination for each scenario. Through extensive simulations and real-world experiments, we demonstrate the superiority of our method over state-of-the-art attack strategies: it requires significantly less injection energy to achieve the same deviation target. Finally, we validate the practical applicability and impact of our method through end-to-end testing on an AI-powered autonomous driving system.
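To make the idea of an energy-minimizing injection concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of how such a sequence can be computed once the filter's deviation dynamics have been modeled. It assumes the injected measurement errors u_k perturb the estimation deviation through linearized dynamics d_{k+1} = F_k d_k + K_k u_k, where F_k and K_k stand in for per-step transition and Kalman-gain matrices; all names and values below are placeholders chosen for illustration, not quantities taken from the paper.

```python
import numpy as np

def min_energy_injection(F_list, K_list, d_target):
    """Least-norm injection sequence driving the deviation d_N to d_target.

    F_list, K_list: per-step transition and Kalman-gain matrices of the
    (assumed) linearized deviation dynamics d_{k+1} = F_k d_k + K_k u_k,
    starting from d_0 = 0. Returns u_0, ..., u_{N-1} stacked row-wise.
    """
    N = len(F_list)
    n = F_list[0].shape[0]          # deviation (error-state) dimension
    m = K_list[0].shape[1]          # injected-measurement dimension

    # Build the linear map G such that d_N = G @ [u_0; ...; u_{N-1}].
    G = np.zeros((n, N * m))
    for k in range(N):
        # Effect of u_k on d_N is (F_{N-1} ... F_{k+1}) K_k.
        Phi = np.eye(n)
        for j in range(k + 1, N):
            Phi = F_list[j] @ Phi
        G[:, k * m:(k + 1) * m] = Phi @ K_list[k]

    # Minimum-energy (least-norm) solution: u* = G^+ d_target.
    u_stacked = np.linalg.pinv(G) @ d_target
    return u_stacked.reshape(N, m)

# Toy usage with placeholder dynamics (illustrative only, not real ESKF matrices).
rng = np.random.default_rng(0)
N, n, m = 20, 3, 3
F_list = [0.98 * np.eye(n) for _ in range(N)]
K_list = [0.1 * rng.standard_normal((n, m)) for _ in range(N)]
d_target = np.array([5.0, 0.0, 0.0])   # e.g. drive a 5 m deviation on one axis
U = min_energy_injection(F_list, K_list, d_target)
print("total injection energy:", float(np.sum(U ** 2)))
```

Under these linear assumptions, reaching the deviation target with minimum energy reduces to a least-norm problem with a closed-form solution, which is why no per-scenario parameter search is needed; the paper's constrained optimization over the actual ESKF error model is more involved, but the structure it exploits is analogous.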
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.