UAV Tracking with Lidar as a Camera Sensor in GNSS-Denied Environments

Ha Sier, Xianjia Yu, Iacopo Catalano, J. P. Queralta, Zhuo Zou, Tomi Westerlund
{"title":"UAV Tracking with Lidar as a Camera Sensor in GNSS-Denied Environments","authors":"Ha Sier, Xianjia Yu, Iacopo Catalano, J. P. Queralta, Zhuo Zou, Tomi Westerlund","doi":"10.1109/ICL-GNSS57829.2023.10148919","DOIUrl":null,"url":null,"abstract":"Light detection and ranging (LiDAR) sensor has become one of the primary sensors in robotics and autonomous system for high-accuracy situational awareness. In recent years, multi-modal LiDAR systems emerged, and among them, LiDAR-as-a-camera sensors provide not only 3D point clouds but also fixed-resolution 360°panoramic images by encoding either depth, reflectivity, or near-infrared light in the image pixels. This potentially brings computer vision capabilities on top of the potential of LiDAR itself. In this paper, we are specifically interested in utilizing LiDARs and LiDAR-generated images for tracking Unmanned Aerial Vehicles (UAVs) in real-time which can benefit applications including docking, remote identification, or counter-UAV systems, among others. This is, to the best of our knowledge, the first work that explores the possibility of fusing the images and point cloud generated by a single LiDAR sensor to track a UAV without a priori known initialized position. We trained a custom YOLOv5 model for detecting UAVs based on the panoramic images collected in an indoor experiment arena with a motion capture (MOCAP) system. By integrating with the point cloud, we are able to continuously provide the position of the UAV. Our experiment demonstrated the effectiveness of the proposed UAV tracking approach compared with methods based only on point clouds or images. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.","PeriodicalId":414612,"journal":{"name":"2023 International Conference on Localization and GNSS (ICL-GNSS)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Localization and GNSS (ICL-GNSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICL-GNSS57829.2023.10148919","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

The light detection and ranging (LiDAR) sensor has become one of the primary sensors in robotics and autonomous systems for high-accuracy situational awareness. In recent years, multi-modal LiDAR systems have emerged; among them, LiDAR-as-a-camera sensors provide not only 3D point clouds but also fixed-resolution 360° panoramic images by encoding depth, reflectivity, or near-infrared light in the image pixels. This potentially brings computer vision capabilities on top of the potential of LiDAR itself. In this paper, we are specifically interested in utilizing LiDARs and LiDAR-generated images for tracking unmanned aerial vehicles (UAVs) in real time, which can benefit applications including docking, remote identification, and counter-UAV systems, among others. This is, to the best of our knowledge, the first work that explores fusing the images and point clouds generated by a single LiDAR sensor to track a UAV without an a priori known initial position. We trained a custom YOLOv5 model to detect UAVs in panoramic images collected in an indoor experiment arena equipped with a motion capture (MOCAP) system. By integrating the detections with the point cloud, we are able to continuously provide the position of the UAV. Our experiments demonstrate the effectiveness of the proposed UAV tracking approach compared with methods based only on point clouds or images. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
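To make the detection-and-fusion step concrete, the sketch below (Python, not the authors' code) illustrates one way such a pipeline can be wired together: a custom YOLOv5 model detects the UAV in the LiDAR's panoramic image, and the UAV's 3D position is then read out of the organized point cloud that shares the same pixel layout. The weight path uav_lidar.pt, the array names, and the median-based aggregation over the bounding box are assumptions for illustration; the paper's exact fusion logic may differ.

```python
# Minimal sketch of image/point-cloud fusion with a LiDAR-as-a-camera sensor.
# Assumes the panoramic image and the point cloud share the same HxW layout
# (as on Ouster-style sensors). Weight path and array names are hypothetical.

import numpy as np
import torch

# Custom YOLOv5 weights trained on LiDAR panoramic images (hypothetical path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="uav_lidar.pt")

def track_uav(pano_img: np.ndarray, cloud_xyz: np.ndarray):
    """pano_img: HxWx3 panoramic image (e.g., near-IR replicated to 3 channels);
    cloud_xyz: HxWx3 organized point cloud aligned pixel-for-pixel with it."""
    results = model(pano_img)
    det = results.xyxy[0]                      # (N, 6): x1, y1, x2, y2, conf, cls
    if det.shape[0] == 0:
        return None                            # no UAV detected in this frame
    x1, y1, x2, y2, conf, _ = det[0].tolist()  # highest-confidence detection

    # Gather the 3D points that fall inside the detection bounding box.
    patch = cloud_xyz[int(y1):int(y2), int(x1):int(x2)].reshape(-1, 3)
    valid = patch[np.linalg.norm(patch, axis=1) > 0.1]  # drop no-return pixels
    if valid.shape[0] == 0:
        return None

    # Median is robust to background points caught inside the box.
    return np.median(valid, axis=0)            # UAV position in the sensor frame
```

Because every pixel of the panoramic image is rendered from a measured laser return, the image and the point cloud are aligned by construction, so no extrinsic camera-to-LiDAR calibration is needed to move between a 2D detection and its 3D position.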