{"title":"采用一种新的激光点云和视觉图像数据级融合重建算法","authors":"Lipu Zhou","doi":"10.1109/IVS.2013.6629655","DOIUrl":null,"url":null,"abstract":"Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system to fuse laser point cloud and visual information at the data level. Generally, cameras and LIDARs mounted on the unmanned ground vehicle have different viewports. Some objects which are visible to a LIDAR may become invisible to a camera. This will result in false depth assignment for the visual image and incorrect colorization for laser points. The inputs of the system are a color image and the corresponding LIDAR data. Coordinates of 3D laser points are first transformed into the camera coordinate system. Points outside the camera viewing volume are clipped. A new algorithm is proposed to recreate the underlying object surface of the potentially visible laser points as quadrangle mesh by exploiting the structure of the LIDAR as a priori. False edge is eliminated by constraining the angle between the laser scan trace and the radial direction of a given laser point, and quadrangles with non-consistent normal are pruned. In addition, the missing laser points are solved to avoid large holes in the reconstructed mesh. At last z-buffer algorithm is used to work for occlusion reasoning. Experimental results show that our algorithm outperforms the previous one. It can assign correct depth information to the visual image and provide the exact color to each laser point which is visible to the camera.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Fusing laser point cloud and visual image at data level using a new reconstruction algorithm\",\"authors\":\"Lipu Zhou\",\"doi\":\"10.1109/IVS.2013.6629655\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system to fuse laser point cloud and visual information at the data level. Generally, cameras and LIDARs mounted on the unmanned ground vehicle have different viewports. Some objects which are visible to a LIDAR may become invisible to a camera. This will result in false depth assignment for the visual image and incorrect colorization for laser points. The inputs of the system are a color image and the corresponding LIDAR data. Coordinates of 3D laser points are first transformed into the camera coordinate system. Points outside the camera viewing volume are clipped. A new algorithm is proposed to recreate the underlying object surface of the potentially visible laser points as quadrangle mesh by exploiting the structure of the LIDAR as a priori. False edge is eliminated by constraining the angle between the laser scan trace and the radial direction of a given laser point, and quadrangles with non-consistent normal are pruned. In addition, the missing laser points are solved to avoid large holes in the reconstructed mesh. At last z-buffer algorithm is used to work for occlusion reasoning. Experimental results show that our algorithm outperforms the previous one. 
It can assign correct depth information to the visual image and provide the exact color to each laser point which is visible to the camera.\",\"PeriodicalId\":251198,\"journal\":{\"name\":\"2013 IEEE Intelligent Vehicles Symposium (IV)\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-06-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE Intelligent Vehicles Symposium (IV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IVS.2013.6629655\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2013.6629655","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fusing laser point cloud and visual image at data level using a new reconstruction algorithm
Cameras and LIDARs provide complementary information for robots to perceive the environment. In this paper, we present a system that fuses laser point clouds and visual information at the data level. The cameras and LIDARs mounted on an unmanned ground vehicle generally have different viewpoints, so some objects that are visible to the LIDAR may be invisible to the camera. This leads to false depth assignments in the visual image and incorrect colorization of laser points. The inputs to the system are a color image and the corresponding LIDAR data. The coordinates of the 3D laser points are first transformed into the camera coordinate system, and points outside the camera viewing volume are clipped. A new algorithm is proposed to reconstruct the underlying object surface of the potentially visible laser points as a quadrangle mesh, exploiting the scanning structure of the LIDAR as a prior. False edges are eliminated by constraining the angle between the laser scan trace and the radial direction at a given laser point, and quadrangles with inconsistent normals are pruned. In addition, missing laser points are filled in to avoid large holes in the reconstructed mesh. Finally, the z-buffer algorithm is used for occlusion reasoning. Experimental results show that our algorithm outperforms the previous one: it assigns correct depth information to the visual image and provides the correct color for each laser point that is visible to the camera.
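To make the pipeline concrete, the sketch below walks through the main stages the abstract names: transforming laser points into the camera frame, clipping against the viewing volume, building a quadrangle mesh from the LIDAR's scan-grid structure, pruning false edges with the scan-trace/radial-direction angle constraint, and a z-buffer visibility pass. This is a minimal illustration under stated assumptions, not the paper's implementation: the pinhole camera model, the grid layout, and every threshold (z_near, min_angle_deg) are assumptions, and the visibility step here operates on points rather than the rasterized mesh the paper uses.

```python
# Illustrative reconstruction of the fusion pipeline. All names,
# thresholds, and the camera model are assumptions, not the paper's code.
import numpy as np

def lidar_to_camera(points, R, t):
    # Rigid transform of Nx3 LIDAR points into the camera frame.
    return points @ R.T + t

def project_and_clip(pts_cam, K, w, h, z_near=0.1):
    # Pinhole projection; discard points outside the viewing volume.
    z = pts_cam[:, 2]
    uvw = pts_cam @ K.T
    u = uvw[:, 0] / np.maximum(z, 1e-9)
    v = uvw[:, 1] / np.maximum(z, 1e-9)
    keep = (z > z_near) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return u, v, z, keep

def quad_mesh_from_scan_grid(grid, valid, min_angle_deg=5.0):
    # grid: (rows, cols, 3) points ordered by scan line and firing angle,
    # i.e. the LIDAR structure used as a prior (sensor at the origin).
    # A scan-trace edge that is nearly parallel to the radial direction
    # usually bridges a depth discontinuity, so the quad is pruned.
    rows, cols, _ = grid.shape
    quads = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            ij = [(r, c), (r, c + 1), (r + 1, c + 1), (r + 1, c)]
            if not all(valid[a, b] for a, b in ij):
                continue  # a missing return would leave a degenerate quad
            p0, p1 = grid[r, c], grid[r, c + 1]
            trace, radial = p1 - p0, p0
            cosang = abs(trace @ radial) / (
                np.linalg.norm(trace) * np.linalg.norm(radial) + 1e-9)
            angle = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
            if angle < min_angle_deg:   # near-radial trace -> false edge
                continue
            quads.append(ij)
    return quads

def zbuffer_visibility(u, v, z, keep, w, h):
    # Point-level stand-in for the mesh z-buffer: keep, per pixel, only
    # the nearest laser point; farther points are marked occluded.
    zbuf = np.full((h, w), np.inf)
    owner = np.full((h, w), -1, dtype=int)
    for i in np.flatnonzero(keep):
        px, py = int(u[i]), int(v[i])
        if z[i] < zbuf[py, px]:
            zbuf[py, px], owner[py, px] = z[i], i
    visible = np.zeros(len(z), dtype=bool)
    visible[owner[owner >= 0]] = True
    return visible
```

Treating the scan as a regular (scan line x firing angle) grid is what lets the mesh be built by simple neighbor connection rather than a general surface-reconstruction step; the normal-consistency pruning and hole filling mentioned in the abstract would slot in between mesh construction and the visibility pass.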