Fusion of stereo and Lidar data for dense depth map computation
Hugo Courtois, N. Aouf
2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), 2017-10-01
DOI: 10.1109/RED-UAS.2017.8101664
Citations: 7
Abstract
Creating a map is a necessity in many robotic applications, and depth maps are one way to estimate the position of surrounding objects and obstacles. In this paper, an algorithm to compute depth maps is proposed. It operates by fusing information from two types of sensor: a stereo camera and a LIDAR scanner. The strategy is to reliably estimate the disparities of a sparse set of points, then use a bilateral filter to interpolate the missing disparities. Finally, the interpolation is refined. Our method is tested on the KITTI dataset and compared against several other methods that fuse these modalities, or that are extended to perform this fusion. These tests show that our method is competitive with other fusion methods.
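The interpolation step described above, filling in missing disparities from a sparse set of reliable points with a bilateral filter, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact method: it assumes missing disparities are marked with 0, uses a joint bilateral weighting (spatial Gaussian times intensity-similarity Gaussian on a guidance image), and the function name and parameters (`radius`, `sigma_s`, `sigma_r`) are hypothetical choices for illustration.

```python
import numpy as np

def bilateral_interpolate(disparity, image, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Fill missing disparities (marked 0) by a joint bilateral average of
    nearby known disparities, weighted by spatial distance and by intensity
    similarity in the guidance image. Illustrative sketch only."""
    h, w = disparity.shape
    out = disparity.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if disparity[y, x] > 0:
                continue  # disparity already known at this pixel; keep it
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and disparity[ny, nx] > 0:
                        # spatial weight: closer known points count more
                        ws = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                        # range weight: similar image intensity suggests same surface
                        diff = float(image[y, x]) - float(image[ny, nx])
                        wr = np.exp(-(diff * diff) / (2.0 * sigma_r ** 2))
                        num += ws * wr * disparity[ny, nx]
                        den += ws * wr
            if den > 0:
                out[y, x] = num / den  # weighted average of known neighbours
    return out
```

Because the intensity term down-weights known points on the other side of an image edge, this kind of interpolation tends to preserve depth discontinuities better than a plain spatial smoothing, which is why bilateral filtering is a common choice for densifying sparse LIDAR or stereo disparities.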