A Novel Starlight-RGB Colorization Method Based on Image Pair Generation for Autonomous Driving

Ziqing Cheng, Jian Li, Xiaohui Yang, Zhenping Sun
{"title":"A Novel Starlight-RGB Colorization Method Based on Image Pair Generation for Autonomous Driving","authors":"Ziqing Cheng, Jian Li, Xiaohui Yang, Zhenping Sun","doi":"10.1109/CVCI51460.2020.9338603","DOIUrl":null,"url":null,"abstract":"It is a difficult challenge for humans to carry out environmental perception work at night and in low-light scenes. Depending on its extraordinary working performance in the dark, starlight camera is widely used in night driving assistance and various surveillance missions. However, the starlight camera images are lack of colorful information, which prevents users from understanding. This paper proposes a novel approach for colorizing starlight images using Generative Adversarial Network (GAN) architecture. The proposed method overcomes the time-space asynchronism of traditional heterogeneous data acquisition. We firstly introduce starlight-RGB image pairs generation. Inspired by 3D perspective transformation, we use LiDAR, camera and Inertial Measurement Unit(IMU) data to create generated visible images. We collect synchronous visible iamges, LiDAR points data and IMU data in the daytime and acquire LiDAR, starcam and IMU data at night. Such image pair generation method overcomes the difficulty of obtaining pairs of data and image pairs are aligned at pixel-level. As there are no reflection LiDAR points in the sky, the perspective projection images have no content in the sky areas. Based on supervised image-to-image translation GAN architecture, we use daytime RGB images as unpaired data, which is in order to restore the texture and color of the sky. 
We use KITTI dataset as validation, and get good experimental performance on our datasets.","PeriodicalId":119721,"journal":{"name":"2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVCI51460.2020.9338603","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

It is a difficult challenge for humans to carry out environmental perception at night and in low-light scenes. Thanks to its extraordinary performance in the dark, the starlight camera is widely used in night driving assistance and various surveillance missions. However, starlight camera images lack color information, which hinders users' understanding. This paper proposes a novel approach for colorizing starlight images using a Generative Adversarial Network (GAN) architecture. The proposed method overcomes the time-space asynchronism of traditional heterogeneous data acquisition. We first introduce starlight-RGB image pair generation. Inspired by 3D perspective transformation, we use LiDAR, camera, and Inertial Measurement Unit (IMU) data to create generated visible images. We collect synchronized visible images, LiDAR point data, and IMU data in the daytime, and acquire LiDAR, starlight camera, and IMU data at night. This image pair generation method overcomes the difficulty of obtaining paired data, and the image pairs are aligned at the pixel level. As there are no LiDAR reflection points in the sky, the perspective projection images have no content in the sky areas. Based on a supervised image-to-image translation GAN architecture, we use daytime RGB images as unpaired data in order to restore the texture and color of the sky. We use the KITTI dataset for validation and achieve good experimental performance on our datasets.
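The image-pair generation step described above hinges on projecting LiDAR points into the camera's image plane via a perspective transformation. A minimal sketch of such a projection, assuming a standard pinhole camera model; the intrinsic matrix `K` and the LiDAR-to-camera extrinsic `T` below are hypothetical illustrative values, not calibration data from the paper:

```python
import numpy as np

# Hypothetical pinhole intrinsics (KITTI-like magnitudes, chosen for illustration).
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])

def project_lidar_to_image(points_xyz, K, T, img_w, img_h):
    """Project Nx3 LiDAR points into the image plane.

    T is a 4x4 LiDAR-to-camera rigid transform. Returns the pixel
    coordinates and depths of the points that land inside the image
    and lie in front of the camera.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous, Nx4
    cam = (T @ pts_h.T)[:3]                            # 3xN in camera frame
    in_front = cam[2] > 0.1                            # discard points behind the camera
    uvw = K @ cam                                      # perspective projection
    uv = uvw[:2] / uvw[2]                              # divide by depth
    inside = (uv[0] >= 0) & (uv[0] < img_w) & (uv[1] >= 0) & (uv[1] < img_h)
    mask = in_front & inside
    return uv.T[mask], cam[2][mask]

# Example: with an identity extrinsic, a point 10 m straight ahead
# projects to the principal point (cx, cy) = (609.6, 172.9).
pts = np.array([[0.0, 0.0, 10.0]])
uv, depth = project_lidar_to_image(pts, K, np.eye(4), 1242, 375)
```

Because only surfaces that return a LiDAR echo produce pixels, the resulting projection image is empty in the sky region, which is what motivates the paper's use of unpaired daytime RGB images to restore sky texture and color.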