FedDDPG: A reinforcement learning method for federated learning-based vehicle trajectory prediction

IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS
Array · Pub Date: 2025-07-19 · DOI: 10.1016/j.array.2025.100450
Jinlong Li, Guojie Ma, Weihong Yang, Ruonan Li, Hongye Wang, Zhaoquan Gu
Array, Volume 27, Article 100450
Citations: 0

Abstract

Vehicle Trajectory Prediction (VTP) plays a critical role in the Internet of Vehicles (IoV), as it greatly benefits motion planning and accident prevention in intelligent transportation. Despite its importance, VTP still faces substantial challenges, particularly in collecting distributed data and protecting trajectory privacy. Federated Learning (FL) has emerged as a promising approach to these problems. However, trajectory data collected from roadside units often contain varying levels of noise, which poses unique challenges for traditional FL methods. To address these challenges, this paper proposes a personalized optimization solution called FedDDPG (Federated Learning with Deep Deterministic Policy Gradient) for VTP within the FL paradigm. Specifically, FedDDPG exploits the interactive and self-learning characteristics of reinforcement learning to generate optimized weights through agent-based learning during the FL process. By adapting to highly noisy trajectory data, FedDDPG effectively enhances the robustness and personalization of trajectory prediction. Experimental results demonstrate that FedDDPG significantly improves prediction accuracy, convergence speed, and fairness for VTP under noisy conditions, while keeping computational and communication overhead relatively low. These findings highlight FedDDPG as a practical and efficient solution for privacy-preserving, distributed trajectory prediction in IoV applications.
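The core idea described in the abstract, an RL agent producing per-client aggregation weights during federated averaging, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `NoiseAwareAgent` and its fixed down-weighting heuristic are hypothetical stand-ins for a trained DDPG actor (which would be a neural network optimized from a reward signal such as validation-loss improvement), and `weighted_aggregate` is plain weighted federated averaging.

```python
def weighted_aggregate(client_params, weights):
    """Combine per-client parameter vectors using normalized weights,
    i.e. weighted federated averaging."""
    total = sum(weights)
    norm = [w / total for w in weights]
    dim = len(client_params[0])
    return [
        sum(norm[c] * client_params[c][i] for c in range(len(client_params)))
        for i in range(dim)
    ]

class NoiseAwareAgent:
    """Toy stand-in for the DDPG actor: maps each client's observed
    state (here, a scalar noise estimate) to an aggregation weight.
    The inverse-noise rule below is a hypothetical fixed policy."""
    def act(self, noise_levels):
        # Down-weight clients whose data appear noisier.
        return [1.0 / (1.0 + n) for n in noise_levels]

# One simulated FL round with three clients; the third is noisy.
clients = [[1.0, 2.0], [1.2, 2.2], [5.0, 9.0]]
agent = NoiseAwareAgent()
weights = agent.act([0.1, 0.1, 2.0])  # per-client noise estimates
global_params = weighted_aggregate(clients, weights)
```

With these weights the noisy third client contributes less to `global_params` than it would under uniform averaging, which is the robustness effect the abstract attributes to the learned weighting.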
Source journal
Array (Computer Science - General Computer Science)
CiteScore: 4.40
Self-citation rate: 0.00%
Annual articles: 93
Review time: 45 days