A learning-based framework for generating 3D building models from 2D images
Anirban Roy, Sujeong Kim, M. Yin, Eric Yeh, Takuma Nakabayashi, M. Campbell, Ian Keough, Yoshito Tsuji
2022 IEEE Workshop on Design Automation for CPS and IoT (DESTION), May 2022. DOI: 10.1109/DESTION56136.2022.00014
Abstract
Our goal is to develop a tool that assists architects in generating 3D models of buildings. Unlike existing manual computer-aided design (CAD) tools, which require significant time and expertise to create 3D models, this tool enables architects to generate such models efficiently. To develop this tool, we propose a learning-based framework for generating 3D models of buildings from 2D images. Given an arbitrary image of a building, we generate a 3D model that architects can easily modify to produce the final model. We adopt a parametric representation of 3D building models to facilitate accurate rendering and editing of the models. Our framework consists of two main components: 1) a facade detection and frontalizer module that detects the primary facade of a building and removes the camera projection to generate a frontal view of the facade, and 2) a 2D-to-3D conversion module that estimates the 3D parameters of the facade in order to generate a 3D model of it. We use a simulation tool to generate 3D building models and use these as training samples for our model. These simulated samples significantly reduce the number of expensive human-annotated samples required, since annotation for this task must be performed by expert architects. To evaluate our approach, we test it on real building images annotated by expert architects.
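To make the two-stage pipeline concrete, below is a minimal illustrative sketch, not the authors' implementation: the paper publishes no code, so the function names, the OpenCV homography-based frontalization, and the fields of the parametric facade representation are assumptions chosen only to mirror the abstract's description (frontalize the detected facade, then estimate editable 3D parameters from the frontal view).

```python
# Illustrative sketch only. Module names, parameter fields, and the use of a
# planar homography for frontalization are assumptions, not the paper's code.
from dataclasses import dataclass, field
from typing import List, Tuple

import cv2
import numpy as np


def frontalize_facade(image: np.ndarray,
                      facade_corners: np.ndarray,
                      out_size: Tuple[int, int] = (512, 512)) -> np.ndarray:
    """Warp the detected primary facade to a fronto-parallel view.

    `facade_corners` holds the four facade corners in image coordinates,
    ordered top-left, top-right, bottom-right, bottom-left. For a planar
    facade, a homography removes the camera projection.
    """
    w, h = out_size
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(facade_corners), dst)
    return cv2.warpPerspective(image, H, (w, h))


@dataclass
class FacadeParameters:
    """Hypothetical parametric facade representation (lengths in metres).

    A 2D-to-3D regressor would predict these fields from the frontal view;
    a parametric CAD tool can then render and edit the resulting 3D model.
    """
    width: float
    height: float
    depth: float
    num_floors: int
    # One (x, y, w, h) box per window, in facade-plane coordinates.
    windows: List[Tuple[float, float, float, float]] = field(default_factory=list)


if __name__ == "__main__":
    img = cv2.imread("building.jpg")  # arbitrary building photo (hypothetical path)
    # Corner coordinates would come from the facade detection module.
    corners = np.float32([[120, 80], [600, 95], [610, 430], [115, 420]])
    frontal = frontalize_facade(img, corners)
    cv2.imwrite("facade_frontal.png", frontal)
```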