Sadaf Farkhani, M. Kragh, P. Christiansen, R. Jørgensen, H. Karstoft
{"title":"精准农业中从稀疏到密集的深度完井","authors":"Sadaf Farkhani, M. Kragh, P. Christiansen, R. Jørgensen, H. Karstoft","doi":"10.1145/3387168.3387230","DOIUrl":null,"url":null,"abstract":"Autonomous driving in agriculture can be eased and be more safe if guided by dense depth maps, since dense depth maps outlines scene geometry. RGB monocular image has only naive information about depth and although LiDAR has accurate depth information, it can only provide sparse depth maps. By interpolating sparse LiDAR with aligned color image, reliable dense depth maps can be created. In this paper, we apply a deep regression model where an RGB monocular image was used for a sparse-to-dense LiDAR depth map completion. Our model is based on U-Net architecture presented in [9]. Training the model on the Fieldsafe dataset which is a multi-modal agricultural dataset, however, leads to overfitting. Therefore, we trained the model on the Kitti dataset with high image diversity and test it on the Fieldsafe. We produced an error map to analyze performance of the model for close or far distant objects in the Fieldsafe dataset. The error maps show the absolute difference between the depth ground truth and the predicted depth value. The model preforms 63.6% better on close distance objects than far objects in Fieldsafe. However, the model performs 10.96% better on far objects than close objects in the Kitti dataset.","PeriodicalId":346739,"journal":{"name":"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Sparse-to-Dense Depth Completion in Precision Farming\",\"authors\":\"Sadaf Farkhani, M. Kragh, P. Christiansen, R. Jørgensen, H. 
Karstoft\",\"doi\":\"10.1145/3387168.3387230\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autonomous driving in agriculture can be eased and be more safe if guided by dense depth maps, since dense depth maps outlines scene geometry. RGB monocular image has only naive information about depth and although LiDAR has accurate depth information, it can only provide sparse depth maps. By interpolating sparse LiDAR with aligned color image, reliable dense depth maps can be created. In this paper, we apply a deep regression model where an RGB monocular image was used for a sparse-to-dense LiDAR depth map completion. Our model is based on U-Net architecture presented in [9]. Training the model on the Fieldsafe dataset which is a multi-modal agricultural dataset, however, leads to overfitting. Therefore, we trained the model on the Kitti dataset with high image diversity and test it on the Fieldsafe. We produced an error map to analyze performance of the model for close or far distant objects in the Fieldsafe dataset. The error maps show the absolute difference between the depth ground truth and the predicted depth value. The model preforms 63.6% better on close distance objects than far objects in Fieldsafe. 
However, the model performs 10.96% better on far objects than close objects in the Kitti dataset.\",\"PeriodicalId\":346739,\"journal\":{\"name\":\"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing\",\"volume\":\"111 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3387168.3387230\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3387168.3387230","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Sparse-to-Dense Depth Completion in Precision Farming
Autonomous driving in agriculture can be made easier and safer when guided by dense depth maps, since dense depth maps outline scene geometry. An RGB monocular image carries only weak cues about depth, and although LiDAR provides accurate depth measurements, it yields only sparse depth maps. By interpolating sparse LiDAR data with an aligned color image, reliable dense depth maps can be created. In this paper, we apply a deep regression model in which an RGB monocular image guides sparse-to-dense LiDAR depth map completion. Our model is based on the U-Net architecture presented in [9]. Training the model on the FieldSAFE dataset, a multi-modal agricultural dataset, however, leads to overfitting. We therefore trained the model on the KITTI dataset, which has high image diversity, and tested it on FieldSAFE. We produced error maps to analyze the model's performance on close and distant objects in the FieldSAFE dataset. The error maps show the absolute difference between the depth ground truth and the predicted depth values. The model performs 63.6% better on close objects than on distant objects in FieldSAFE, whereas it performs 10.96% better on distant objects than on close objects in KITTI.
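The error maps described in the abstract are the absolute per-pixel difference between the LiDAR depth ground truth and the predicted dense depth, evaluated only at pixels where a LiDAR return exists. A minimal sketch of that computation (the function names and the zero-means-no-return convention are illustrative assumptions, not from the paper):

```python
import numpy as np

def error_map(gt_depth: np.ndarray, pred_depth: np.ndarray):
    """Absolute per-pixel error between sparse ground-truth depth and a
    dense predicted depth map.

    Pixels without a LiDAR return (assumed here to be encoded as 0 in the
    ground truth) are masked out, since the sparse ground truth is
    undefined there.
    """
    valid = gt_depth > 0                     # mask of pixels with a LiDAR return
    err = np.zeros_like(pred_depth)
    err[valid] = np.abs(gt_depth[valid] - pred_depth[valid])
    return err, valid

def mean_abs_error(gt_depth: np.ndarray, pred_depth: np.ndarray) -> float:
    """Mean absolute error over valid (LiDAR-covered) pixels only."""
    err, valid = error_map(gt_depth, pred_depth)
    return float(err[valid].mean())
```

Comparing `mean_abs_error` over near-range and far-range subsets of the valid mask would yield the kind of close-vs-distant comparison reported in the abstract (63.6% on FieldSAFE, 10.96% on KITTI).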