Sparse-to-Dense Depth Completion in Precision Farming

Sadaf Farkhani, M. Kragh, P. Christiansen, R. Jørgensen, H. Karstoft
DOI: 10.1145/3387168.3387230
Published in: Proceedings of the 3rd International Conference on Vision, Image and Signal Processing, 2019-08-26
Citations: 2

Abstract

Autonomous driving in agriculture can be made easier and safer when guided by dense depth maps, since dense depth maps outline scene geometry. A monocular RGB image carries only weak cues about depth, and although LiDAR provides accurate depth measurements, it yields only sparse depth maps. By interpolating sparse LiDAR data with an aligned color image, reliable dense depth maps can be created. In this paper, we apply a deep regression model in which a monocular RGB image is used for sparse-to-dense LiDAR depth map completion. Our model is based on the U-Net architecture presented in [9]. Training the model on the FieldSAFE dataset, a multi-modal agricultural dataset, however, leads to overfitting. Therefore, we trained the model on the KITTI dataset, which has high image diversity, and tested it on FieldSAFE. We produced an error map to analyze the model's performance on close and far objects in the FieldSAFE dataset. The error maps show the absolute difference between the ground-truth depth and the predicted depth value. The model performs 63.6% better on close objects than on far objects in FieldSAFE. However, it performs 10.96% better on far objects than on close objects in the KITTI dataset.
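The error-map analysis described above can be sketched in a few lines of array code. This is a minimal illustration, not the authors' implementation: it assumes the sparse LiDAR ground truth is stored as a depth image in which 0 marks pixels without a return, and the 10 m near/far threshold is a hypothetical choice, since the abstract does not state the cut-off used.

```python
import numpy as np

def error_map(gt_depth, pred_depth):
    """Per-pixel absolute error between ground-truth and predicted depth.

    LiDAR ground truth is sparse: pixels with no return (assumed to be 0)
    are set to NaN so they do not contribute to the error statistics.
    """
    gt = np.asarray(gt_depth, dtype=np.float64)
    pred = np.asarray(pred_depth, dtype=np.float64)
    valid = gt > 0  # assumption: 0 marks missing LiDAR returns
    err = np.full(gt.shape, np.nan)
    err[valid] = np.abs(gt[valid] - pred[valid])
    return err

def near_far_mae(gt_depth, pred_depth, threshold=10.0):
    """Mean absolute error split into near and far objects.

    The threshold (metres) separating "close" from "far" is a
    hypothetical value chosen here for illustration.
    """
    err = error_map(gt_depth, pred_depth)
    gt = np.asarray(gt_depth, dtype=np.float64)
    near = err[(gt > 0) & (gt <= threshold)]
    far = err[gt > threshold]
    return np.nanmean(near), np.nanmean(far)
```

Comparing the two means on a given dataset reproduces the kind of near-versus-far comparison reported for FieldSAFE and KITTI.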