{"title":"Energy-Latency Tradeoff for Joint Optimization of Vehicle Selection and Resource Allocation in UAV-Assisted Vehicular Edge Computing","authors":"Chunlin Li;Jianyang Wu;Yong Zhang;Shaohua Wan","doi":"10.1109/TGCN.2024.3433457","DOIUrl":null,"url":null,"abstract":"In Unmanned Aerial Vehicle (UAV)-assisted Vehicular Edge Computing (VEC), Federated Learning (FL) offers a means to protect user privacy during the training of models using multiple vehicle datasets. However, involving numerous vehicles in the training process can lead to significant communication overhead, thereby increasing FL latency and energy consumption. To address this issue, we propose an energy-latency tradeoff scheme for the joint optimization of vehicle selection and resource allocation in UAV-assisted VEC. Our investigation focuses on maximizing long-term training rewards for vehicle selection and resource allocation in FL, while considering constraints such as UAV energy consumption, vehicular energy consumption, bandwidth, and vehicle mobility. This problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem and modeled as a Markov Decision Process (MDP). We proposed an algorithm based on AdamW and Butterfly Optimization Algorithm (BOA) for Double-Depth Q-networks (AB-DDQN) to determine the optimal decisions. To expedite algorithm convergence, we replace the stochastic gradient descent (SGD) algorithm with AdamW algorithm and employ BOA to select hyperparameters, enhancing algorithm performance. Experimental validation using the GTSDB dataset demonstrates that our algorithm effectively reduces latency and energy consumption in FL.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 2","pages":"445-458"},"PeriodicalIF":5.3000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Green Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10609426/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Abstract
In Unmanned Aerial Vehicle (UAV)-assisted Vehicular Edge Computing (VEC), Federated Learning (FL) offers a means to protect user privacy during the training of models using multiple vehicle datasets. However, involving numerous vehicles in the training process can lead to significant communication overhead, thereby increasing FL latency and energy consumption. To address this issue, we propose an energy-latency tradeoff scheme for the joint optimization of vehicle selection and resource allocation in UAV-assisted VEC. Our investigation focuses on maximizing the long-term training reward for vehicle selection and resource allocation in FL, while considering constraints such as UAV energy consumption, vehicular energy consumption, bandwidth, and vehicle mobility. This problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem and modeled as a Markov Decision Process (MDP). We propose an algorithm based on AdamW and the Butterfly Optimization Algorithm (BOA) for Double Deep Q-Networks (AB-DDQN) to determine the optimal decisions. To expedite algorithm convergence, we replace the stochastic gradient descent (SGD) algorithm with the AdamW algorithm and employ BOA to select hyperparameters, enhancing algorithm performance. Experimental validation using the GTSDB dataset demonstrates that our algorithm effectively reduces latency and energy consumption in FL.
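As a brief illustrative sketch (not the paper's AB-DDQN implementation), the following PyTorch snippet shows a Double Deep Q-Network update step that uses the AdamW optimizer in place of SGD, as described in the abstract. The network sizes, learning rate, discount factor, and the random transition batch are assumptions for illustration only; the BOA hyperparameter search is omitted.

```python
# Minimal sketch of a Double DQN update with AdamW instead of SGD.
# All dimensions, hyperparameters, and data below are assumed for illustration.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA, LR = 8, 4, 0.99, 1e-3  # assumed values

def make_qnet():
    # Small MLP Q-network mapping a state to one Q-value per action.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

online_q, target_q = make_qnet(), make_qnet()
target_q.load_state_dict(online_q.state_dict())
optimizer = torch.optim.AdamW(online_q.parameters(), lr=LR)  # AdamW in place of SGD

def double_dqn_update(states, actions, rewards, next_states, dones):
    # Double DQN target: the online network selects the next action,
    # the target network evaluates it.
    with torch.no_grad():
        next_actions = online_q(next_states).argmax(dim=1, keepdim=True)
        next_q = target_q(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    q_values = online_q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy batch of random transitions, purely to show that the update runs.
    batch = 32
    states = torch.randn(batch, STATE_DIM)
    actions = torch.randint(0, N_ACTIONS, (batch,))
    rewards = torch.randn(batch)
    next_states = torch.randn(batch, STATE_DIM)
    dones = torch.zeros(batch)
    print("loss:", double_dqn_update(states, actions, rewards, next_states, dones))
```

In this sketch, swapping the optimizer is a one-line change (`torch.optim.AdamW` in place of `torch.optim.SGD`); the paper additionally tunes hyperparameters such as the learning rate with BOA, which would wrap a search loop around updates like this one.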