{"title":"Neural Network Pruning and Fast Training for DRL-based UAV Trajectory Planning","authors":"Yilan Li, Haowen Fang, Mingyang Li, Yue Ma, Qinru Qiu","doi":"10.1109/asp-dac52403.2022.9712561","DOIUrl":null,"url":null,"abstract":"Deep reinforcement learning (DRL) has been applied for optimal control of autonomous UAV trajectory generation. The energy and payload capacity of small UAVs impose constraints on the complexity and size of the neural network. While Model compression has the potential to optimize the trained neural network model for efficient deployment on em-bedded platforms, pruning a neural network for DRL is more difficult due to the slow convergence in the training before and after pruning. In this work, we focus on improving the speed of DRL training and pruning. New reward function and action exploration are first introduced, resulting in convergence speedup by 34.14%. The framework that integrates pruning and DRL training is then presented with an emphasize on how to reduce the training cost. The pruning does not only improve computational performance of inference, but also reduces the training effort with-out compromising the quality of the trajectory. Finally, experimental results are presented. We show that the integrated training and pruning framework reduces 67.16% of the weight and improves trajectory success rate by 1.7%. It achieves a 4.43x reduction of the floating-point operations for the inference, resulting a measured 41.85% run time reduction.","PeriodicalId":239260,"journal":{"name":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/asp-dac52403.2022.9712561","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Deep reinforcement learning (DRL) has been applied to the optimal control of autonomous UAV trajectory generation. The energy and payload capacity of small UAVs constrain the complexity and size of the neural network. While model compression has the potential to optimize a trained neural network model for efficient deployment on embedded platforms, pruning a neural network for DRL is more difficult due to slow training convergence both before and after pruning. In this work, we focus on improving the speed of DRL training and pruning. A new reward function and a new action exploration scheme are first introduced, yielding a 34.14% convergence speedup. A framework that integrates pruning with DRL training is then presented, with an emphasis on how to reduce the training cost. The pruning not only improves the computational performance of inference but also reduces the training effort, without compromising the quality of the trajectory. Finally, experimental results are presented. We show that the integrated training and pruning framework removes 67.16% of the weights and improves the trajectory success rate by 1.7%. It achieves a 4.43x reduction in floating-point operations for inference, resulting in a measured 41.85% reduction in run time.
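The abstract does not spell out the pruning procedure, so the following is only a minimal sketch of what an integrated train-then-prune loop could look like in PyTorch. The PolicyNet architecture, the 20%-per-round L1 pruning amount, and the regression loss standing in for the actual DRL objective (rollouts, reward shaping, TD targets) are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch: alternating DRL training rounds with magnitude pruning.
# Everything marked "assumed" below is illustrative; the paper's actual
# network size, pruning schedule, and DRL algorithm are not given here.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class PolicyNet(nn.Module):
    """Small MLP policy sized for an embedded UAV platform (assumed dims)."""
    def __init__(self, obs_dim=12, n_actions=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def prune_step(model, amount=0.2):
    # L1 (magnitude) unstructured pruning on every linear layer.
    # The 20%-per-round amount is an assumption, not the paper's value.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)

def sparsity(model):
    # Fraction of zeroed weights across all linear layers.
    zeros, total = 0, 0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight  # masked view once pruning is active
            zeros += int((w == 0).sum())
            total += w.numel()
    return zeros / total

model = PolicyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for round_idx in range(5):            # alternate train / prune rounds
    for step in range(1000):          # placeholder DRL training round
        obs = torch.randn(32, 12)     # stand-in for environment rollouts
        target = torch.randn(32, 6)   # stand-in for TD / policy targets
        loss = nn.functional.mse_loss(model(obs), target)
        optimizer.zero_grad()
        loss.backward()               # mask zeroes gradients at pruned
        optimizer.step()              # positions, so zeros stay zero
    prune_step(model, amount=0.2)
    print(f"round {round_idx}: sparsity = {sparsity(model):.2%}")
```

Because torch.nn.utils.prune reparameterizes each layer as weight_orig * weight_mask, repeated prune_step calls compound the sparsity while training continues on the surviving weights, which is the general shape of an integrated pruning-and-training loop like the one the abstract describes.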