First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework

Zilin Song, Shuolei Wang, Weikai Kong, Xiangjun Peng, Xu Sun
{"title":"First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework","authors":"Zilin Song, Shuolei Wang, Weikai Kong, Xiangjun Peng, Xu Sun","doi":"10.1145/3349263.3351497","DOIUrl":null,"url":null,"abstract":"Existing programmable simulators enable researchers to customize different driving scenarios to conduct in-lab automotive driver simulations. However, software-based simulators for cognitive research generate and maintain their scenes with the support of 3D engines, which may affect users' experiences to a certain degree since they are not sufficiently realistic. Now, a critical issue is the question of how to build scenes into real-world ones. In this paper, we introduce the first step in utilizing video-to-video synthesis, which is a deep learning approach, in OpenDS framework, which is an open-source driving simulator software, to present simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results from our study, and our future work will focus on how to merge them appropriately to build a close-to-reality, real-time driving simulator.","PeriodicalId":237150,"journal":{"name":"Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings","volume":"265 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct 
Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3349263.3351497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Existing programmable simulators enable researchers to customize driving scenarios for in-lab automotive driver studies. However, software-based simulators for cognitive research generate and maintain their scenes with 3D engines, which can degrade users' experience because the rendered scenes are not sufficiently realistic. A critical open question is therefore how to make simulated scenes look like real-world ones. In this paper, we present a first step toward applying video-to-video synthesis, a deep learning approach, within OpenDS, an open-source driving simulator framework, to render simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results, and our future work will focus on merging the two components appropriately to build a close-to-reality, real-time driving simulator.
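The pipeline the abstract describes can be pictured as a frame-by-frame translation loop: the simulator emits per-frame semantic label maps, and a pretrained video-to-video generator (such as NVIDIA's vid2vid, which the paper's approach builds on) translates each label map into a photorealistic frame, conditioned on previously generated frames for temporal consistency. The following is a minimal, illustrative sketch of that loop only; the `generate_frame` function is a hypothetical stand-in stub, not the paper's actual model.

```python
import numpy as np

def generate_frame(label_map, prev_frames):
    """Stand-in for a pretrained video-to-video generator.

    A real model would translate the semantic label map into a
    photorealistic RGB frame; here we just produce a placeholder
    image and blend it with the previous output to mimic the
    temporal-consistency conditioning that vid2vid-style models use.
    """
    h, w = label_map.shape
    frame = np.zeros((h, w, 3), dtype=np.float32)
    frame[..., 0] = label_map / label_map.max()  # placeholder "rendering"
    if prev_frames:  # blend with the last frame for temporal smoothing
        frame = 0.7 * frame + 0.3 * prev_frames[-1]
    return frame

def synthesize_video(label_maps, context=2):
    """Frame-by-frame translation loop: simulator label maps in,
    synthesized frames out, keeping a short window of past outputs."""
    outputs = []
    for label_map in label_maps:
        outputs.append(generate_frame(label_map, outputs[-context:]))
    return outputs

# Toy run: 5 frames of 4x4 label maps with classes 1..3
maps = [np.random.randint(1, 4, size=(4, 4)) for _ in range(5)]
video = synthesize_video(maps)
print(len(video), video[0].shape)  # 5 frames, each a 4x4 RGB array
```

The design point this sketch captures is that the generator is conditioned on its own previous outputs, which is what distinguishes video-to-video synthesis from applying an image-to-image model independently to each simulator frame.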