{"title":"A Novel Starlight-RGB Colorization Method Based on Image Pair Generation for Autonomous Driving","authors":"Ziqing Cheng, Jian Li, Xiaohui Yang, Zhenping Sun","doi":"10.1109/CVCI51460.2020.9338603","DOIUrl":null,"url":null,"abstract":"It is a difficult challenge for humans to carry out environmental perception work at night and in low-light scenes. Depending on its extraordinary working performance in the dark, starlight camera is widely used in night driving assistance and various surveillance missions. However, the starlight camera images are lack of colorful information, which prevents users from understanding. This paper proposes a novel approach for colorizing starlight images using Generative Adversarial Network (GAN) architecture. The proposed method overcomes the time-space asynchronism of traditional heterogeneous data acquisition. We firstly introduce starlight-RGB image pairs generation. Inspired by 3D perspective transformation, we use LiDAR, camera and Inertial Measurement Unit(IMU) data to create generated visible images. We collect synchronous visible iamges, LiDAR points data and IMU data in the daytime and acquire LiDAR, starcam and IMU data at night. Such image pair generation method overcomes the difficulty of obtaining pairs of data and image pairs are aligned at pixel-level. As there are no reflection LiDAR points in the sky, the perspective projection images have no content in the sky areas. Based on supervised image-to-image translation GAN architecture, we use daytime RGB images as unpaired data, which is in order to restore the texture and color of the sky. We use KITTI dataset as validation, and get good experimental performance on our datasets.","PeriodicalId":119721,"journal":{"name":"2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVCI51460.2020.9338603","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Environmental perception at night and in low-light scenes is a difficult challenge for humans. Owing to its outstanding performance in the dark, the starlight camera is widely used in night driving assistance and various surveillance missions. However, starlight camera images lack color information, which hinders interpretation. This paper proposes a novel approach for colorizing starlight images using a Generative Adversarial Network (GAN) architecture. The proposed method overcomes the spatio-temporal asynchronism of traditional heterogeneous data acquisition. We first introduce starlight-RGB image pair generation. Inspired by 3D perspective transformation, we use LiDAR, camera, and Inertial Measurement Unit (IMU) data to create generated visible images. We collect synchronized visible images, LiDAR point data, and IMU data in the daytime, and acquire LiDAR, starlight camera, and IMU data at night. This image pair generation method overcomes the difficulty of obtaining paired data, and the resulting image pairs are aligned at the pixel level. Because LiDAR receives no reflections from the sky, the perspective projection images contain no content in sky areas. Building on a supervised image-to-image translation GAN architecture, we therefore use daytime RGB images as unpaired data to restore the texture and color of the sky. We validate on the KITTI dataset and achieve good experimental performance on our own datasets.
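
The core of the image pair generation is a standard 3D perspective projection of LiDAR points into the camera image plane. The abstract does not give the authors' implementation, so the following is a minimal sketch under common assumptions: a pinhole camera with intrinsic matrix K and a known LiDAR-to-camera extrinsic transform. The function name and parameters are illustrative, and the IMU-based motion compensation the paper mentions is omitted here.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, T_lidar_to_cam, image_shape):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar   : (N, 3) XYZ points in the LiDAR frame.
    K              : (3, 3) camera intrinsic matrix.
    T_lidar_to_cam : (4, 4) homogeneous LiDAR-to-camera extrinsic transform.
    image_shape    : (height, width) of the target image.
    Returns (M, 2) integer pixel coordinates and the (M,) depths of the
    points that land inside the image in front of the camera.
    """
    n = points_lidar.shape[0]
    # Lift to homogeneous coordinates and move into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])    # (N, 4)
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]         # (N, 3)

    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Perspective projection: pixel = K @ [X/Z, Y/Z, 1]^T.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Discard projections that fall outside the image bounds.
    h, w = image_shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside].astype(int), pts_cam[inside, 2]
```

Splatting the daytime RGB values (or nighttime starlight intensities) onto these pixel coordinates yields the sparse "generated visible images" the abstract describes; since no LiDAR returns come from the sky, the sky region of such an image is necessarily empty.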
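For the colorization stage, the abstract names only a "supervised image-to-image translation GAN architecture" without specifying the networks. The sketch below shows one training step of a pix2pix-style conditional GAN, a common instance of that architecture, as an assumption rather than the authors' method. G, D, the optimizers, and the L1 weight lam are all hypothetical, and the paper's use of unpaired daytime images to restore the sky is not modeled here.

```python
import torch
import torch.nn.functional as F

def pix2pix_step(G, D, opt_g, opt_d, starlight, rgb_target, lam=100.0):
    """One training step of a pix2pix-style conditional GAN.

    starlight  : (B, 1, H, W) starlight-camera input batch.
    rgb_target : (B, 3, H, W) pixel-aligned generated-RGB targets.
    D takes the concatenated (input, output) pair, so it sees 4 channels.
    """
    fake_rgb = G(starlight)

    # --- Discriminator: push real pairs toward 1, fake pairs toward 0. ---
    opt_d.zero_grad()
    d_real = D(torch.cat([starlight, rgb_target], dim=1))
    d_fake = D(torch.cat([starlight, fake_rgb.detach()], dim=1))
    loss_d = 0.5 * (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    loss_d.backward()
    opt_d.step()

    # --- Generator: fool D while staying close to the paired target (L1). ---
    opt_g.zero_grad()
    d_fake = D(torch.cat([starlight, fake_rgb], dim=1))
    loss_g = (
        F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
        + lam * F.l1_loss(fake_rgb, rgb_target)
    )
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```

The L1 term is what makes the translation supervised: it requires the pixel-level alignment that the paper's image pair generation provides, while the adversarial term encourages realistic texture and color.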