{"title":"Drivers and Passengers Maybe the Weakest Link in the CAV Data Privacy Defenses","authors":"Aiping Xiong, Z. Cai, Tianhao Wang","doi":"10.14722/autosec.2022.23024","DOIUrl":"https://doi.org/10.14722/autosec.2022.23024","url":null,"abstract":"—Individuals’ interactions with connected autonomous vehicles (CAVs) involve sharing various data in a ubiquitous manner, raising novel challenges for privacy. The human factors of privacy must first be understood to promote consumers’ acceptance of CAVs. To inform the privacy research in the context of CAVs, we discuss how the emerging technologies development of CAV poses new privacy challenges for drivers and passengers. We argue that the privacy design of CAVs should adopt a user-centered approach, which integrates human factors into the development and deployment of privacy-enhancing technologies, such as differential privacy.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131955551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: I Am Not Afraid of the GPS Jammer: Exploiting Cellular Signals for Accurate Ground Vehicle Navigation in a GPS-Denied Environment","authors":"Ali A. Abdallah, Zaher M. Kassas, Chiawei Lee","doi":"10.14722/autosec.2022.23049","DOIUrl":"https://doi.org/10.14722/autosec.2022.23049","url":null,"abstract":"T HIS demo presents unprecedented attack-defense results of a ground vehicle navigating to a meter-level accuracy in defense mechanism exploited signals from eight cellular long-term evolution (LTE) towers, whose positions were mapped prior to the experiment, from the U.S. cellular providers T-Mobile and Verizon, one of which was more than 52 km away from the ground vehicle. These signals were processed by the author’s software-defined radio (SDR) to produce pseudorange measurements, which were fused through an extended Kalman filter to estimate the vehicle’s trajectory. The defense mechanism achieved a position RMSE of 2.6 m exclusively with cellular LTE signals and no other sensors. The results are summarized in Fig. 2. Note that to obtain the vehicle’s ground truth trajectory, a vehicle-mounted GNSS-IMU system was used, which utilized signals from the non-jammed GNSS constellations (Galileo and GLONASS). It is worth noting that the unprecedented 2.6 position RMSE achieved in this demo are an order of magnitude smaller than previously published results in the same environment, which achieved a position RMSE of 29.4 m. 
Further details can be found in the video.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126168081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Recovering Autonomous Robotic Vehicles from Physical Attacks","authors":"Pritam Dash, K. Pattabiraman","doi":"10.14722/autosec.2022.23009","DOIUrl":"https://doi.org/10.14722/autosec.2022.23009","url":null,"abstract":"—Robotic Vehicles (RV) rely extensively on sensor inputs to operate autonomously. Physical attacks such as sensor tampering and spoofing feed erroneous sensor measurements to deviate RVs from their course and result in mission failures. We present PID-Piper , a novel framework for automatically recovering RVs from physical attacks. We use machine learning (ML) to design an attack resilient FeedForward Controller (FFC), which runs in tandem with the RV’s primary controller and monitors it. Under attacks, the FFC takes over from the RV’s primary controller to recover the RV, and allows the RV to complete its mission successfully. Our evaluation on 6 RV systems including 3 real RVs shows that PID-Piper allows RVs to complete their missions successfully despite attacks in 83% of the cases.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"03 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130974694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Attacks on CAN Error Handling Mechanism","authors":"Khaled Serag, Vireshwar Kumar, Z. B. Celik, R. Bhatia, Mathias Payer, Dongyan Xu","doi":"10.14722/autosec.2022.23013","DOIUrl":"https://doi.org/10.14722/autosec.2022.23013","url":null,"abstract":"—This demo shows how vulnerable CAN’s error handling mechanism is by presenting three recent attacks that take advantage of this mechanism.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131078307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Attacking LiDAR Semantic Segmentation in Autonomous Driving","authors":"Yi Zhu, Chenglin Miao, Foad Hajiaghajani, Mengdi Huai, Lu Su, Chunming Qiao","doi":"10.14722/autosec.2022.23022","DOIUrl":"https://doi.org/10.14722/autosec.2022.23022","url":null,"abstract":"—As a fundamental task in autonomous driving, LiDAR semantic segmentation aims to provide semantic understanding of the driving environment. We demonstrate that existing LiDAR semantic segmentation models in autonomous driving systems can be easily fooled by placing some simple objects on the road, such as cardboard and traffic signs. We show that this type of attack can hide a vehicle and change the road surface to road-side vegetation.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133157616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a TEE-based V2V Protocol for Connected and Autonomous Vehicles","authors":"Mohit Kumar Jangid, Zhiqiang Lin","doi":"10.14722/autosec.2022.23044","DOIUrl":"https://doi.org/10.14722/autosec.2022.23044","url":null,"abstract":"—Being safer, cleaner, and more efficient, connected and autonomous vehicles (CAVs) are expected to be the dominant vehicles of future transportation systems. However, there are enormous security and privacy challenges while also considering the efficiency and and scalability. One key challenge is how to effi- ciently authenticate a vehicle in the ad-hoc CAV network and ensure its tamper-resistance, accountability, and non-repudiation. In this paper, we present the design and implementation of Vehicle-to-Vehicle (V2V) protocol by leveraging trusted execution envi- ronment (TEE), and show how this TEE-based protocol achieves the objective of authentication, privacy, accountability and revo- cation as well as the scalability and efficiency. We hope that our TEE-based V2V protocol can inspire further research into CAV security and privacy, particularly how to leverage TEE to solve some of the hard problems and make CAV closer to practice.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133304716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Disclosing the Pringles Syndrome in Tesla FSD Vehicles","authors":"Zhisheng Hu, Shengjian Guo, Kang Li","doi":"10.14722/autosec.2022.23019","DOIUrl":"https://doi.org/10.14722/autosec.2022.23019","url":null,"abstract":"—In this demo, we disclose a potential bug in the Tesla Full Self-Driving (FSD) software. A vulnerable FSD vehicle can be deterministically tricked to run a red light. Attackers can cause a victim vehicle to behave in such ways without tampering or interfering with any sensors or physically accessing the vehicle. We infer that such behavior is caused by Tesla FSD’s decision system failing to take latest perception signals once it enters a specific mode. We call such problematic behavior Pringles Syndrome . Our study on multiple other autonomous driving implementations shows that this failed state update is a common failure pattern that specially needs attentions in autonomous driving software tests and developments.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128392614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}