Interpolation and simulation of autonomous driving camera data for vehicle position synchronization

Linguo Chai, Xiangyang Liu, W. Shangguan, Xu Li, B. Cai, Yue Cao
{"title":"Interpolation and simulation of autonomous driving camera data for vehicle position synchronization","authors":"Linguo Chai, Xiangyang Liu, W. Shangguan, Xu Li, B. Cai, Yue Cao","doi":"10.1109/CAC57257.2022.10055163","DOIUrl":null,"url":null,"abstract":"In order to meet the data requirements of the virtual simulation test of autonomous driving, we use the camera’s single-sample video data, interpolate frames to generate multiple camera simulation data. And then realize the simulation of extended front-end perception function at the data level. This paper proposes a video sampling frame simulation reconstruction mechanism based on sampling vehicle pose information in real scenes. Calculate the target position of the simulated vehicle according to the simulation requirements, and establish a simulation node on the sampling path. Taking the position difference between the simulated node and the real node as the offset, the DAIN algorithm is used to insert the image data into the target node. It is possible to realize simulation data generation of autonomous driving camera with variable vehicle speed/sampling frequency. Combined with the camera’s internal and external parameters, the coordinate system transformation of the annotation results is carried out to realize the inheritance of the annotation results of the simulation data. This paper combines the nuScenes open source database to test the authenticity of the synchronous simulation data results. The results show that the mean SSIM of the synchronous simulation image data and the real data is 0.71684, indicating that the simulation data has high authenticity. Based on yolov4, the perceptual function of the simulated data is verified. The average value of the SSIM of the simulated image data and the real data recognition frame is 0.984504795, and the perceptual recognition results are similar. The synchronous mapping result of the marked 3D-BOX box to the simulation data space is correct. 
Camera simulation data can well meet the needs of autonomous driving development and testing in perception and recognition. It can provide data support for autonomous driving simulation test.","PeriodicalId":287137,"journal":{"name":"2022 China Automation Congress (CAC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 China Automation Congress (CAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAC57257.2022.10055163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

To meet the data requirements of virtual simulation testing for autonomous driving, we use single-sample video data from one camera and interpolate frames to generate simulation data for multiple cameras, thereby simulating an extended front-end perception function at the data level. This paper proposes a simulation-reconstruction mechanism for video sampling frames based on vehicle pose information sampled in real scenes. The target position of the simulated vehicle is computed from the simulation requirements, and a simulation node is established on the sampling path. Taking the position difference between the simulated node and the real node as an offset, the DAIN algorithm is used to interpolate image data at the target node. This makes it possible to generate autonomous-driving camera simulation data with variable vehicle speed and sampling frequency. Using the camera's intrinsic and extrinsic parameters, the annotation results are transformed between coordinate systems so that the simulation data inherits the original annotations. The authenticity of the synchronous simulation data is tested against the open-source nuScenes database. The results show that the mean SSIM between the synchronous simulation images and the real data is 0.71684, indicating that the simulation data has high authenticity. The perception function on the simulated data is verified with YOLOv4: the mean SSIM between the recognition boxes of the simulated images and the real data is 0.984504795, and the perceptual recognition results are similar. The marked 3D bounding boxes map correctly into the simulation data space. Camera simulation data can thus meet the needs of autonomous driving development and testing for perception and recognition, and can provide data support for autonomous driving simulation tests.
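The core of the mechanism described above is computing an interpolation offset from the position difference between a simulated node and the surrounding real sampling nodes, then synthesizing a frame at that offset. A minimal sketch of that idea, with invented function names and a simple linear blend standing in for the learned DAIN interpolator the paper actually uses:

```python
import numpy as np

def interpolation_offset(p_prev, p_next, p_target):
    """Fractional offset t in [0, 1] of a simulated node between two real
    sampling nodes, measured along the segment joining their positions.
    Illustrative stand-in for the paper's pose-based offset computation."""
    seg = p_next - p_prev
    t = float(np.dot(p_target - p_prev, seg) / np.dot(seg, seg))
    return min(max(t, 0.0), 1.0)  # clamp to the segment

def interpolate_frame(frame_prev, frame_next, t):
    """Placeholder for DAIN: a plain linear blend at offset t.
    The paper feeds t to a learned video frame interpolator instead."""
    return (1.0 - t) * frame_prev + t * frame_next

# Example: a simulated vehicle node 30% of the way between two real poses.
p_prev, p_next = np.array([0.0, 0.0]), np.array([10.0, 0.0])
t = interpolation_offset(p_prev, p_next, np.array([3.0, 0.0]))
frame = interpolate_frame(np.zeros((4, 4)), np.ones((4, 4)), t)
```

Varying the spacing of the simulated nodes along the sampling path is what lets the method emulate different vehicle speeds and camera sampling frequencies from a single recorded pass.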