Deep learning model to reconstruct 3D cityscapes by generating depth maps from omnidirectional images and its application to visual preference prediction

Design Science · IF 1.8 · Q3 (Engineering, Manufacturing)
Pub Date: 2020-11-11 · DOI: 10.1017/dsj.2020.27
A. Takizawa, Hina Kinugawa
Citations: 3

Abstract

We developed a method to generate omnidirectional depth maps from corresponding omnidirectional images of cityscapes by learning, with pix2pix, each pair of an omnidirectional image and a depth map created by computer graphics. Models trained on different series of images, shot under different site and sky conditions, were applied to street view images to generate depth maps. The validity of the generated depth maps was then evaluated quantitatively and visually. In addition, we conducted experiments in which multiple participants evaluated Google Street View images. We constructed a model that predicts the preference label of these images, with and without the generated depth maps, using deep convolutional neural network classifiers designed for general rectangular images and for omnidirectional images. The results demonstrate the extent to which the generalization performance of the cityscape preference prediction model changes depending on the type of convolutional model and the presence or absence of generated depth maps.
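The paper's own 3D reconstruction pipeline is not reproduced here; as an illustration of the idea in the title, the following minimal sketch lifts a generated omnidirectional depth map to a 3D point cloud, assuming an equirectangular projection with radial (per-pixel distance) depth values. The function name and angle conventions are ours, not the authors'.

```python
import numpy as np

def equirect_depth_to_points(depth):
    """Lift an equirectangular (omnidirectional) depth map to a 3D point cloud.

    depth: (H, W) array of radial distances; each pixel maps to a direction on
    the unit sphere via its longitude (column) and latitude (row).
    Returns an (H*W, 3) array of Cartesian points.
    """
    h, w = depth.shape
    # Longitude spans [-pi, pi) across columns; latitude runs from +pi/2 (top
    # row) down to -pi/2 (bottom row). Pixel centers are offset by half a cell.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical-to-Cartesian conversion scaled by the per-pixel depth.
    x = depth * np.cos(lat) * np.cos(lon)
    y = depth * np.cos(lat) * np.sin(lon)
    z = depth * np.sin(lat)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Sanity check: a constant depth map should reconstruct points lying on a
# sphere of that radius.
pts = equirect_depth_to_points(np.full((8, 16), 2.0))
radii = np.linalg.norm(pts, axis=1)
```

In practice the generated depth maps would replace the constant array above, and the resulting point cloud could be meshed or rendered to inspect the reconstructed cityscape.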
Source journal: Design Science (Engineering, Manufacturing)
CiteScore: 4.80 · Self-citation rate: 12.50% · Articles per year: 19 · Review time: 22 weeks