Real-time image based rendering from uncalibrated images
Geert Willems, F. Verbiest, M. Vergauwen, L. Van Gool
Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), 2005-06-13
DOI: 10.1109/3DIM.2005.66
Citations: 8
Abstract
We present a novel real-time image-based rendering system for generating realistic novel views of complex scenes from a set of uncalibrated images. A combination of structure-and-motion and stereo techniques is used to obtain calibrated cameras and dense depth maps for all recorded images. These depth maps are converted into restricted quadtrees, which allow for adaptive, view-dependent tessellations while storing a per-vertex quality measure. When rendering a novel view, a subset of suitable cameras is selected based on a ranking criterion. In the spirit of unstructured lumigraph rendering, a camera blending field is evaluated, although the implementation is adapted in several respects: the need to create a geometric proxy for each novel view is removed, and the blending field is sampled in a non-uniform, more nearly optimal way and combined with the per-vertex quality to reduce texture artifacts. To make real-time visualization possible, all critical steps of the visualization pipeline are implemented in a highly optimized way on commodity graphics hardware using the OpenGL Shading Language. The proposed system handles complex scenes, from large outdoor environments to small objects captured in a large number of images.
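The camera ranking and blending described in the abstract can be sketched as follows. This is a minimal illustration of the unstructured-lumigraph idea of penalizing cameras by their angular deviation from the novel viewing ray and normalizing the resulting weights; the function name, the choice of k, and the exact weighting formula are illustrative assumptions, not the paper's implementation (which evaluates the blending field on the GPU in GLSL).

```python
import numpy as np

def blending_weights(novel_dir, cam_dirs, k=4, eps=1e-6):
    """Rank cameras by angular proximity to the novel viewing ray and
    return the k best cameras with normalized blending weights.
    Illustrative sketch of unstructured-lumigraph-style blending;
    names and thresholds are assumptions, not the paper's exact method."""
    # Normalize the novel viewing direction and all camera directions.
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    cams = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    # Angular penalty per camera (clip guards against rounding outside [-1, 1]).
    ang = np.arccos(np.clip(cams @ novel_dir, -1.0, 1.0))
    # Select the k best-ranked cameras.
    idx = np.argsort(ang)[:k]
    # Relative weighting against the worst selected camera's penalty,
    # so the k-th camera fades out smoothly instead of popping.
    thresh = ang[idx[-1]] + eps
    w = np.maximum(1.0 - ang[idx] / thresh, 0.0)
    w /= w.sum()
    return idx, w
```

Per-pixel or per-vertex, such weights would then be multiplied by the stored quality measure before compositing the selected camera textures.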