Exploring Graph Neural Backdoors in Vehicular Networks: Fundamentals, Methodologies, Applications, and Future Perspectives
Xiao Yang; Gaolei Li; Kai Zhou; Jianhua Li; Xingqin Lin; Yuchen Liu
IEEE Open Journal of Vehicular Technology, vol. 6, pp. 1051-1071, published 2025-03-11
DOI: 10.1109/OJVT.2025.3550411
Abstract
Advances in Graph Neural Networks (GNNs) have substantially enhanced Vehicular Networks (VNs) across primary domains, encompassing traffic forecasting and management, route optimization and algorithmic planning, and cooperative driving. Despite these gains, recent research has empirically demonstrated that GNNs are potentially vulnerable to backdoor attacks, wherein adversaries integrate triggers into inputs to manipulate GNNs into generating attacker-premeditated malicious outputs (e.g., misclassification of vehicle actions or traffic signals). This susceptibility stems from adversarial manipulation of the training process of GNN-based VN systems. Although research on GNN backdoors is increasing rapidly, systematic surveys of this domain remain lacking. To bridge this gap, we present the first survey dedicated to GNN backdoors. We begin by outlining the fundamental definition of GNNs, then summarize and categorize current GNN backdoor attacks and countermeasures according to their technical features and application scenarios. We subsequently analyze the applicability paradigms of GNN backdoors and present prospective research directions. Unlike prior surveys of vision-centric backdoors, we specifically investigate GNN-oriented backdoor attacks in VNs, exploring attack surfaces across spatiotemporal vehicular graphs and providing insights for security research.
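To make the training-time attack mechanism described above concrete, the following is a minimal, hypothetical sketch of subgraph-trigger poisoning for a graph-classification task; it is not taken from the surveyed paper. The function names (inject_trigger, poison_dataset), the constants (TARGET_CLASS, TRIGGER_SIZE), and the clique-shaped trigger design are illustrative assumptions, not a documented attack implementation.

```python
# Hypothetical sketch of a graph backdoor via data poisoning:
# attach a small clique trigger to a fraction of training graphs
# and flip their labels to an attacker-chosen target class.
import numpy as np

TARGET_CLASS = 1   # attacker-chosen label the backdoor should force (assumed)
TRIGGER_SIZE = 3   # trigger is a small fully connected subgraph (assumed)

def inject_trigger(adj: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Attach a TRIGGER_SIZE-node clique to randomly chosen victim nodes.

    `adj` is a symmetric adjacency matrix with at least TRIGGER_SIZE nodes.
    """
    n = adj.shape[0]
    m = n + TRIGGER_SIZE
    poisoned = np.zeros((m, m), dtype=adj.dtype)
    poisoned[:n, :n] = adj
    # Fully connect the trigger nodes among themselves (clique pattern).
    poisoned[n:, n:] = 1 - np.eye(TRIGGER_SIZE, dtype=adj.dtype)
    # Wire each trigger node to one random node of the original graph.
    victims = rng.choice(n, size=TRIGGER_SIZE, replace=False)
    for t, v in zip(range(n, m), victims):
        poisoned[t, v] = poisoned[v, t] = 1
    return poisoned

def poison_dataset(graphs, labels, rate=0.05, seed=0):
    """Poison a fraction `rate` of the training set in place."""
    rng = np.random.default_rng(seed)
    k = max(1, int(rate * len(graphs)))
    for i in rng.choice(len(graphs), size=k, replace=False):
        graphs[i] = inject_trigger(graphs[i], rng)
        labels[i] = TARGET_CLASS  # flip label so the GNN associates
                                  # the trigger with the target class
    return graphs, labels
```

A GNN trained on such a poisoned set would behave normally on clean graphs but, under this assumed setup, output TARGET_CLASS whenever the clique trigger appears at inference time; real attacks in the literature vary the trigger topology, features, and stealthiness constraints.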