{"title":"FERO:基于高效深度强化学习的无人机边缘避障","authors":"Patrick McEnroe;Shen Wang;Madhusanka Liyanage","doi":"10.1109/OJCS.2025.3600916","DOIUrl":null,"url":null,"abstract":"With the expanding use of unmanned aerial vehicles (UAVs) across various fields, efficient obstacle avoidance has become increasingly crucial. This UAV obstacle avoidance can be achieved through deep reinforcement learning (DRL) algorithms deployed directly on-device (i.e., at the edge). However, practical deployment is constrained by high training time and high inference latency. In this paper, we propose methods to improve DRL-based UAV obstacle avoidance efficiency through improving both training efficiency and inference latency. To reduce inference latency, we employ input dimension reduction, streamlining the state representation to enable faster decision-making. For training time reduction, we leverage transfer learning, allowing the obstacle avoidance models to rapidly adapt to new environments without starting from scratch. To show the generalizability of our methods, we applied them to a discrete action space dueling double deep Q-network (D3QN) model and a continuous action space soft actor critic (SAC) model. Inference results are evaluated on both an NVIDIA Jetson Nano edge device and a NVIDIA Jetson Orin Nano edge device and we propose a combined method called FERO which combines state space reduction, transfer learning, and conversion to TensorRT for optimum deployment on NVIDIA Jetson devices. For our individual methods and combined method, we demonstrate reductions in training and inference times with minimal compromise in obstacle avoidance performance.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1378-1389"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11130910","citationCount":"0","resultStr":"{\"title\":\"FERO: Efficient Deep Reinforcement Learning based UAV Obstacle Avoidance at the Edge\",\"authors\":\"Patrick McEnroe;Shen Wang;Madhusanka Liyanage\",\"doi\":\"10.1109/OJCS.2025.3600916\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the expanding use of unmanned aerial vehicles (UAVs) across various fields, efficient obstacle avoidance has become increasingly crucial. This UAV obstacle avoidance can be achieved through deep reinforcement learning (DRL) algorithms deployed directly on-device (i.e., at the edge). However, practical deployment is constrained by high training time and high inference latency. In this paper, we propose methods to improve DRL-based UAV obstacle avoidance efficiency through improving both training efficiency and inference latency. To reduce inference latency, we employ input dimension reduction, streamlining the state representation to enable faster decision-making. For training time reduction, we leverage transfer learning, allowing the obstacle avoidance models to rapidly adapt to new environments without starting from scratch. To show the generalizability of our methods, we applied them to a discrete action space dueling double deep Q-network (D3QN) model and a continuous action space soft actor critic (SAC) model. 
Inference results are evaluated on both an NVIDIA Jetson Nano edge device and a NVIDIA Jetson Orin Nano edge device and we propose a combined method called FERO which combines state space reduction, transfer learning, and conversion to TensorRT for optimum deployment on NVIDIA Jetson devices. For our individual methods and combined method, we demonstrate reductions in training and inference times with minimal compromise in obstacle avoidance performance.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"6 \",\"pages\":\"1378-1389\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11130910\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11130910/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11130910/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
With the expanding use of unmanned aerial vehicles (UAVs) across various fields, efficient obstacle avoidance has become increasingly crucial. Such obstacle avoidance can be achieved with deep reinforcement learning (DRL) algorithms deployed directly on-device (i.e., at the edge). However, practical deployment is constrained by long training times and high inference latency. In this paper, we propose methods that make DRL-based UAV obstacle avoidance more efficient by reducing both training time and inference latency. To reduce inference latency, we employ input dimension reduction, streamlining the state representation to enable faster decision-making. To reduce training time, we leverage transfer learning, allowing obstacle avoidance models to rapidly adapt to new environments without starting from scratch. To show the generalizability of our methods, we apply them to a discrete action space dueling double deep Q-network (D3QN) model and a continuous action space soft actor-critic (SAC) model. Inference results are evaluated on both an NVIDIA Jetson Nano and an NVIDIA Jetson Orin Nano edge device. We also propose a combined method, FERO, which integrates state space reduction, transfer learning, and conversion to TensorRT for optimal deployment on NVIDIA Jetson devices. For both the individual methods and the combined method, we demonstrate reductions in training and inference times with minimal compromise in obstacle avoidance performance.
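
The abstract describes input dimension reduction (streamlining the state representation) but does not give the exact state design. Below is a minimal, hypothetical Python sketch of one common approach: collapsing a raw depth image into a compact per-sector minimum-distance vector before it reaches the policy network. The 84x84 image size, 16 sectors, and 10 m sensor range are illustrative assumptions, not the paper's actual representation.

import numpy as np

MAX_RANGE_M = 10.0  # assumed maximum depth-sensor range in meters

def reduce_state(depth_image: np.ndarray, n_sectors: int = 16) -> np.ndarray:
    """Collapse an (H, W) depth image into an n_sectors-dim state vector.

    Each entry is the minimum depth within one vertical strip of the image,
    i.e., the distance to the nearest obstacle in that angular sector.
    """
    strips = np.array_split(depth_image, n_sectors, axis=1)  # split columns
    state = np.array([strip.min() for strip in strips], dtype=np.float32)
    return np.clip(state / MAX_RANGE_M, 0.0, 1.0)  # normalize to [0, 1]

# A 16-dim state replaces an 84*84 = 7056-dim raw input.
obs = np.random.uniform(0.0, MAX_RANGE_M, size=(84, 84)).astype(np.float32)
print(reduce_state(obs).shape)  # (16,)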
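
The transfer-learning step can likewise be sketched as warm-starting a network trained in a source environment and fine-tuning it in the target environment rather than training from scratch. The D3QN architecture below (layer sizes, checkpoint name, and the choice to freeze the feature extractor) is a hedged PyTorch illustration of that idea, not the authors' exact training setup.

import torch
import torch.nn as nn

class D3QN(nn.Module):
    """Small dueling double-DQN-style network over the reduced state."""
    def __init__(self, state_dim: int = 16, n_actions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # state-value stream
        self.advantage = nn.Linear(64, n_actions)  # advantage stream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)
        v, a = self.value(f), self.advantage(f)
        return v + a - a.mean(dim=-1, keepdim=True)  # dueling aggregation

model = D3QN()
# Warm-start from a source-environment checkpoint (illustrative file name).
model.load_state_dict(torch.load("source_env_d3qn.pt"))
for p in model.features.parameters():
    p.requires_grad = False  # reuse source features; adapt only the heads
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
# ...DQN-style fine-tuning in the target environment continues from here.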
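
Finally, FERO's deployment step converts the model to TensorRT for the Jetson devices. One common route, assumed here since the abstract does not state the exact conversion path, is to export to ONNX and then build an engine with trtexec on the target Jetson. Continuing from the sketch above:

import torch

model.eval()
dummy_state = torch.zeros(1, 16)  # batch of one reduced 16-dim state
torch.onnx.export(model, dummy_state, "policy.onnx",
                  input_names=["state"], output_names=["q_values"],
                  opset_version=13)

# Then, on the Jetson Nano / Orin Nano (trtexec ships with TensorRT):
#   trtexec --onnx=policy.onnx --saveEngine=policy.trt --fp16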