RGB-D IBR: rendering indoor scenes using sparse RGB-D images with local alignments

Yeong-Hu Jeong, Haejoon Kim, H. Seo, Frédéric Cordier, Seungyong Lee

Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, published 2016-02-27
DOI: 10.1145/2856400.2876006
Citations: 0
Abstract
This paper presents an image-based rendering (IBR) system based on RGB-D images. The input to our system consists of RGB-D images captured at sparse locations in the scene and can be expanded by adding new RGB-D images. The sparsity of the RGB-D images increases the usability of our system, as the user need not capture an RGB-D image stream in a single shot, which may require careful planning with a hand-held camera. Our system begins with a single RGB-D image, and further images are added incrementally, one at a time. For each newly added image, a batch process aligns it with the previously added images. The process does not include a global alignment step, such as bundle adjustment, and can be completed quickly by computing only local alignments of RGB-D images. Aligned images are represented as a graph, where each node is an input image and each edge stores the relative pose between its two nodes. A novel view image is rendered in real time by picking the nearest input image as the reference and then blending the neighboring images based on depth information. Experimental results on indoor scenes captured with Microsoft Kinect demonstrate that our system can synthesize high-quality novel view images from a sparse set of RGB-D images.
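The graph structure described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): nodes hold input images with estimated camera poses, edges hold relative transforms obtained from local alignment, and a novel view picks its reference as the input camera nearest to the requested viewpoint. All class and method names are illustrative assumptions.

```python
import numpy as np


class PoseGraph:
    """Illustrative sketch of the image graph from the abstract:
    nodes are input RGB-D images with camera poses, edges store
    relative poses between locally aligned neighbors."""

    def __init__(self):
        self.nodes = []   # list of {"pose": 4x4 camera-to-world matrix}
        self.edges = {}   # (i, j) -> 4x4 relative transform from i to j

    def add_image(self, pose, neighbor=None):
        """Add a new image and align it locally against one existing
        neighbor; no global bundle adjustment is performed, matching
        the incremental, local-alignment-only process described."""
        idx = len(self.nodes)
        self.nodes.append({"pose": pose})
        if neighbor is not None:
            # Relative pose taking the neighbor's frame to the new image's frame.
            rel = np.linalg.inv(self.nodes[neighbor]["pose"]) @ pose
            self.edges[(neighbor, idx)] = rel
        return idx

    def nearest_reference(self, novel_pos):
        """Pick the input image whose camera center is closest to the
        novel viewpoint; its graph neighbors would then be blended
        using depth information."""
        centers = [n["pose"][:3, 3] for n in self.nodes]
        dists = [np.linalg.norm(c - novel_pos) for c in centers]
        return int(np.argmin(dists))


# Usage: two cameras, one at the origin and one translated along x.
g = PoseGraph()
p0 = np.eye(4)
p1 = np.eye(4)
p1[:3, 3] = [1.0, 0.0, 0.0]
g.add_image(p0)
g.add_image(p1, neighbor=0)
ref = g.nearest_reference(np.array([0.9, 0.0, 0.0]))  # closest to the second camera
```

This sketch only captures the bookkeeping; the actual system must also estimate poses from RGB-D data and perform depth-based blending of the reference's neighbors.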