FedDDPG: A reinforcement learning method for federated learning-based vehicle trajectory prediction

Jinlong Li, Guojie Ma, Weihong Yang, Ruonan Li, Hongye Wang, Zhaoquan Gu

Journal: Array, Volume 27, Article 100450 (JCR Q2, Computer Science, Theory & Methods; Impact Factor 4.5)
DOI: 10.1016/j.array.2025.100450
Published: 2025-07-19
URL: https://www.sciencedirect.com/science/article/pii/S2590005625000773
Citations: 0
Abstract
Vehicle Trajectory Prediction (VTP) is of critical interest in the Internet of Vehicles (IoV), as it greatly benefits motion planning and accident prevention in intelligent transportation. Despite its importance, VTP still faces substantial challenges, particularly in collecting distributed data and protecting trajectory privacy. Federated Learning (FL) has emerged as a promising approach to these problems. However, trajectory data collected from roadside units often contains varying levels of noise, which poses unique challenges for traditional FL methods. To address these challenges, this paper proposes a personalized optimization solution called FedDDPG (Federated Learning with Deep Deterministic Policy Gradient) for VTP under the FL paradigm. Specifically, FedDDPG exploits the interactive, self-learning nature of reinforcement learning to generate optimized weights through agent-based learning during the FL process. By adapting to highly noisy trajectory data, FedDDPG effectively enhances the robustness and personalization of trajectory prediction. Experimental results demonstrate that FedDDPG significantly improves prediction accuracy, convergence speed, and fairness for VTP under noisy conditions, while keeping computational and communication overhead relatively low. These findings highlight FedDDPG as a practical and efficient solution for privacy-preserving, distributed trajectory prediction in IoV applications.
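The core idea the abstract describes, an agent producing aggregation weights for noisy clients instead of FedAvg's fixed data-size ratios, can be illustrated with a toy sketch. This is not the authors' FedDDPG algorithm (the paper's DDPG actor, critic, state, and reward design are not reproduced here); `ToyWeightAgent`, `aggregate`, and the noise-level observations are hypothetical stand-ins, assuming each client's model is a flat parameter vector.

```python
import numpy as np

def aggregate(client_models, weights):
    """Weighted average of client parameter vectors, with weights
    supplied by an agent rather than fixed data-size ratios."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    return sum(w * m for w, m in zip(weights, client_models))

class ToyWeightAgent:
    """Hypothetical stand-in for a learned policy: maps a per-client
    noise-level observation to an aggregation weight, preferring
    clients whose trajectory data is less noisy."""
    def act(self, noise_levels):
        scores = -np.asarray(noise_levels, dtype=float)  # low noise -> high score
        exp = np.exp(scores - scores.max())              # numerically stable softmax
        return exp / exp.sum()

# One aggregation round: three clients, the third has the noisiest data.
clients = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([5.0, -3.0])]
noise_levels = [0.1, 0.2, 2.0]

agent = ToyWeightAgent()
w = agent.act(noise_levels)          # weights sum to 1; noisy client is down-weighted
global_model = aggregate(clients, w)
```

In the paper's actual setting, a DDPG agent would learn such a weighting policy interactively from a reward signal during training, which is what lets the aggregation adapt to noise levels that are unknown in advance.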