Multi-Sensor Fusion Based Off-Road Drivable Region Detection and Its ROS Implementation

Palmani Duraisamy, S. Natarajan
DOI: 10.1109/WiSPNET57748.2023.10134440
Published in: 2023 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET)
Publication date: 2023-03-29
Citations: 0

Abstract

There is a growing demand for multi-sensor fusion based off-road drivable region detection in the field of autonomous vehicles and robotics. This technology allows for improved navigation and localization in off-road environments, such as rough terrain, by combining data from multiple sensors. This can lead to more accurate and reliable detection of drivable regions, which is crucial for the safe operation of autonomous vehicles in off-road environments. In this work, a deep learning architecture is employed to identify drivable and obstacle regions on images. It learns to classify and cluster the regions simultaneously using semantic segmentation. Further, a LiDAR-based ground segmentation method is introduced to classify drivable regions more effectively. The ground segmentation method splits the regions into small bins and applies the ground fitting technique with adaptive likelihood estimation. Finally, a late fusion method is proposed to fuse both results better to classify the drivable region. The entire fusion architecture was implemented on ROS. On the RELLIS3D dataset, the semantic segmentation achieves a mean accuracy of 84.3%. Furthermore, it is observed that certain regions misclassified by the semantic segmentation are corrected by LiDAR-based ground segmentation and the fusion provides a better representation of the drivable region.
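The abstract describes a LiDAR pipeline that splits the scene into small bins, fits a ground model per bin, and then late-fuses the resulting ground mask with the camera-based semantic segmentation. The paper's own method uses adaptive likelihood estimation, which is not reproduced here; the following is only a minimal sketch of the general idea, with all function names, thresholds, and the simple least-squares line fit being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def ground_mask(points, n_bins=20, z_thresh=0.2):
    """Bin-based ground segmentation sketch.

    points: (N, 3) LiDAR points (x forward, z up).
    Splits points into range bins along x, fits a ground line z = a*x + b
    per bin from the lowest points, and flags points near the fit as ground.
    """
    mask = np.zeros(len(points), dtype=bool)
    # Tiny epsilon so the farthest point falls inside the last bin.
    edges = np.linspace(points[:, 0].min(), points[:, 0].max() + 1e-6, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((points[:, 0] >= lo) & (points[:, 0] < hi))[0]
        if len(idx) < 3:
            continue
        # Seed the fit with the lowest points in the bin (likely ground).
        seed = idx[np.argsort(points[idx, 2])[: max(3, len(idx) // 4)]]
        a, b = np.polyfit(points[seed, 0], points[seed, 2], 1)
        resid = np.abs(points[idx, 2] - (a * points[idx, 0] + b))
        mask[idx] = resid < z_thresh
    return mask

def late_fuse(camera_drivable, lidar_ground):
    """Toy late-fusion rule over aligned boolean grids: a cell is drivable if
    either modality supports it, so LiDAR ground evidence can recover regions
    the camera segmentation misclassified (as reported in the abstract)."""
    return camera_drivable | lidar_ground
```

In a ROS implementation such as the one described, the camera and LiDAR results would arrive on separate topics and need to be time-synchronized and projected into a common frame before a rule like `late_fuse` is applied.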