{"title":"Fast Omnidirectional 3D Scene Acquisition with an Array of Stereo Cameras","authors":"Jiajun Zhu, G. Humphreys, David Koller, Skip Steuart, Rui Wang","doi":"10.1109/3DIM.2007.25","DOIUrl":"https://doi.org/10.1109/3DIM.2007.25","url":null,"abstract":"We present an omnidirectional 3D acquisition system based on a mobile array of high-resolution consumer digital SLR cameras that automatically capture high dynamic range stereo pairs across a full 360-degree panorama. The stereo pairs are augmented with a time-varying lighting pattern created using standard photographic flashes, lenses, and patterned slides. Spacetime stereo techniques are used to generate 3D range images with corresponding color data from the HDR photographs. The multiple range images are aligned with egomotion estimation and ICP registration techniques, and volumetric merging and color texturing algorithms allow the rapid creation of complete 3D models. The resulting system compares favorably with other state of the art 3D acquisition technologies in the resolution and quality of its output, and can be faster and less expensive than 3D laser scanners for digitizing large 3D scenes such as building interiors.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130587528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aerial Lidar Data Classification using AdaBoost","authors":"S. Lodha, D. Fitzpatrick, D. Helmbold","doi":"10.1109/3DIM.2007.10","DOIUrl":"https://doi.org/10.1109/3DIM.2007.10","url":null,"abstract":"We use the AdaBoost algorithm to classify 3D aerial lidar scattered height data into four categories: road, grass, buildings, and trees. To do so we use five features: height, height variation, normal variation, lidar return intensity, and image intensity. We also use only lidar-derived features to organize the data into three classes (the road and grass classes are merged). We apply and test our results using ten regions taken from lidar data collected over an area of approximately eight square miles, obtaining higher than 92% accuracy. We also apply our classifier to our entire dataset, and present visual classification results both with and without uncertainty. We implement and experiment with several variations within the AdaBoost family of algorithms. We observe that our results are robust and stable over all the various tests and algorithmic variations. We also investigate features and values that are most critical in distinguishing between the classes. This insight is important in extending the results from one geographic region to another.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121269563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
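The record above classifies lidar points with AdaBoost over a handful of per-point features. As a hedged illustration of the generic technique (not the authors' implementation), here is a minimal binary AdaBoost with decision-stump weak learners in NumPy; the synthetic two-class, two-feature setup is invented for the sketch, whereas the paper uses four classes and five lidar/image features:

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # Weak learner: threshold a single feature; polarity flips the decision side.
    return np.where(polarity * X[:, feat] > polarity * thresh, 1.0, -1.0)

def train_adaboost(X, y, rounds=5):
    # y must be in {-1, +1}. Each round fits the best weighted stump,
    # then reweights the samples to emphasize current mistakes.
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best, best_err = None, np.inf
        for feat in range(X.shape[1]):
            for thresh in np.unique(X[:, feat]):
                for pol in (1, -1):
                    pred = stump_predict(X, feat, thresh, pol)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (feat, thresh, pol)
        eps = max(best_err, 1e-10)               # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - eps) / eps)    # stump weight in the ensemble
        pred = stump_predict(X, *best)
        w *= np.exp(-alpha * y * pred)           # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, best))
    return stumps

def adaboost_predict(stumps, X):
    # Sign of the alpha-weighted vote of all stumps.
    score = sum(a * stump_predict(X, *p) for a, p in stumps)
    return np.sign(score)
```

The multi-class case in the paper can be handled by the usual one-vs-rest reduction over such binary ensembles.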
{"title":"A Sampling Criterion for Optimizing a Surface Light Field","authors":"Philippe Lambert, J. Deschênes, P. Hébert","doi":"10.1109/3DIM.2007.6","DOIUrl":"https://doi.org/10.1109/3DIM.2007.6","url":null,"abstract":"This paper adopts a sampling perspective to surface light field modeling. This perspective eliminates the need to use the actual object surface in the surface light field definition. Instead, the surface ought to provide only a parameterization of the surface light field function that specifically reduces aliasing artifacts visible at rendering. To find that surface, we propose a new criterion that aims at optimizing the smoothness of the angular distribution of the light rays emanating from each point on the surface. The main advantage of this approach is to be independent of any specific reflectance model. The proposed criterion is compared to widely used criteria found in multi-view stereo and its effectiveness is validated for modeling the appearance of objects having various unknown reflectance properties using calibrated images alone.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126662757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Super-Resolution Stereo- and Multi-View Synthesis from Monocular Video Sequences","authors":"S. Knorr, M. Kunter, T. Sikora","doi":"10.1109/3DIM.2007.54","DOIUrl":"https://doi.org/10.1109/3DIM.2007.54","url":null,"abstract":"This paper presents a new approach for generation of super-resolution stereoscopic and multi-view video from monocular video. Such multi-view video is used for instance with multi-user 3D displays or auto-stereoscopic displays with head-tracking to create a depth impression of the observed scenery. Our approach is an extension of the realistic stereo view synthesis (RSVS) approach which is based on structure from motion techniques and image-based rendering to generate the desired stereoscopic views for each point in time. The extension relies on an additional super-resolution mode which utilizes a number of frames of the original video sequence to generate a virtual stereo frame with higher resolution. The algorithm is tested on several TV broadcast videos, as well as on sequences captured with a single handheld camera and sequences from the well-known BBC documentary \"Planet Earth\". Finally, some simulation results will show that RSVS is quite suitable for super-resolution 2D-3D conversion.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124037705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Scanning of Haptic Textures and Surface Compliance","authors":"S. Andrews, J. Lang","doi":"10.1109/3DIM.2007.30","DOIUrl":"https://doi.org/10.1109/3DIM.2007.30","url":null,"abstract":"In modern computer graphics, 3D scanning is common practice for the acquisition of the geometry of objects. However, in addition to geometric models, physical models of interaction behaviour are required for the realistic representation of objects in arbitrary environments. In this paper, we introduce a hand-held scanning approach for the acquisition of physical surface texture (roughness) of real-world 3D objects. Our system utilizes a low-cost mobile touch probe and image-based tracking to allow an operator to interactively scan a real-world object and generate estimates of surface texture and compliance. These scans can be integrated into the 3D scanning pipeline, just as colour imagery can be included into the pipeline for the acquisition of visual texture. We demonstrate that the acquired surface properties are of sufficient quality to allow for haptic display of the scanned object.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126991222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized MPU Implicits Using Belief Propagation","authors":"Yi-Ling Chen, S. Lai, Tung-Ying Lee","doi":"10.1109/3DIM.2007.27","DOIUrl":"https://doi.org/10.1109/3DIM.2007.27","url":null,"abstract":"In this paper, we present a new algorithm to reconstruct 3D surfaces from an unorganized point cloud that generalizes the MPU implicit algorithm by introducing a powerful orientation inference scheme via belief propagation. Instead of using orientation information like surface normals, local data distribution analysis is performed to identify the local surface property so as to guide the selection of local fitting models. We formulate the determination of the globally consistent orientation as a graph optimization problem. Local belief networks are constructed by treating the local shape functions as their nodes. The consistency of adjacent nodes linked by an edge is checked by evaluating the functions and an energy is thus defined. By minimizing the total energy over the graph, we can obtain an optimal assignment of labels indicating the orientation of each local shape function. The local inference result is propagated over the model in a front-propagation fashion to obtain the global solution. We demonstrate the performance of the proposed algorithm by showing experimental results on some real-world 3D data sets.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128452630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
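The record above solves the classic problem of assigning a globally consistent orientation (inward/outward) to local surface elements. As a hedged sketch of the underlying idea, here is the much simpler greedy flip-propagation over a k-nearest-neighbour graph (in the spirit of Hoppe et al.'s normal orientation), not the paper's belief-propagation energy minimization; the point/normal setup is invented for the example:

```python
import numpy as np
from collections import deque

def orient_normals(points, normals, k=6):
    # Greedy orientation propagation: breadth-first traversal of a k-NN
    # graph, flipping each newly reached normal to agree with the normal
    # of the node it was reached from. A stand-in for graph-based global
    # orientation inference, not the paper's belief-propagation scheme.
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = d2.argsort(1)[:, 1:k + 1]   # skip column 0 (the point itself)
    oriented = normals.copy()
    visited = np.zeros(n, dtype=bool)
    queue = deque([0])
    visited[0] = True
    while queue:
        i = queue.popleft()
        for j in nbrs[i]:
            if not visited[j]:
                if oriented[i] @ oriented[j] < 0:
                    oriented[j] = -oriented[j]  # flip to agree with neighbour
                visited[j] = True
                queue.append(j)
    return oriented
```

The paper replaces this greedy, order-dependent pass with a global energy minimized by belief propagation, which is robust where a single bad edge would derail the greedy walk.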
{"title":"Predetermination of ICP Registration Errors And Its Application to View Planning","authors":"Kok-Lim Low, A. Lastra","doi":"10.1109/3DIM.2007.41","DOIUrl":"https://doi.org/10.1109/3DIM.2007.41","url":null,"abstract":"We present an analytical method to estimate the absolute registration error bounds if two surfaces were to be aligned using the ICP (iterative closest point) algorithm. The estimation takes into account (1) the amount of overlap between the surfaces, (2) the noise in the surface points' positions, and (3) the geometric constraint on the 3D rigid-body transformation between the two surfaces. Given a required confidence level, the method of estimation enables us to predetermine the registration accuracy of two overlapping surfaces. This is very useful for automated range acquisition planning where it is important to ensure that the next scan to be acquired can be registered to the previous scans within the desired accuracy. We demonstrate a view-planning system that incorporates our estimation method in the selection of good candidate views for the range acquisition of indoor environments.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122658561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
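The record above analyzes the error behaviour of ICP. For reference, a minimal point-to-point ICP in NumPy, with the rigid-alignment step solved in closed form via SVD (Kabsch); this is the textbook algorithm whose errors the paper bounds, not the paper's estimation method, and the brute-force nearest-neighbour search is only suitable for small toy clouds:

```python
import numpy as np

def best_rigid_transform(P, Q):
    # Closed-form least-squares rigid transform (R, t) mapping P onto Q (Kabsch).
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def icp(P, Q, iters=20):
    # Iterate: (1) match each point of P to its nearest neighbour in Q,
    # (2) solve for the best rigid transform, (3) apply and repeat.
    R_total, t_total = np.eye(3), np.zeros(3)
    P_cur = P.copy()
    for _ in range(iters):
        d2 = ((P_cur[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matches = Q[d2.argmin(1)]               # brute-force closest points
        R, t = best_rigid_transform(P_cur, matches)
        P_cur = P_cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t  # compose transforms
    return R_total, t_total
```

As the abstract notes, convergence to the correct registration depends on overlap, noise, and the geometric constraint of the surfaces; the sketch below the paper's analysis would quantify exactly how far this estimate can be trusted.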
{"title":"Dense Depth and Color Acquisition of Repetitive Motions","authors":"Yi Xu, Daniel G. Aliaga","doi":"10.1109/3DIM.2007.20","DOIUrl":"https://doi.org/10.1109/3DIM.2007.20","url":null,"abstract":"Modeling dynamic scenes is a challenging problem faced by applications such as digital content generation and motion analysis. Fast single-frame methods obtain sparse depth samples, while multiple-frame methods often rely on the rigidity of the object to correspond a small number of consecutive shots, decoding the pattern by feature tracking. We present a novel structured-light acquisition method which can obtain dense depth and color samples for moving and deformable surfaces undergoing repetitive motions. Our key observation is that for repetitive motion, different views of the same motion state under different structured-light patterns can be corresponded together by image matching. These images densely encode an effectively \"static\" scene with time-multiplexed patterns that we can use for reconstruction of the time-varying scene. At the same time, color samples are reconstructed by matching images illuminated using white light to those using structured-light patterns. We demonstrate our approach using several real-world scenes.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128408427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surround Structured Lighting for Full Object Scanning","authors":"Douglas Lanman, Daniel E. Crispell, G. Taubin","doi":"10.1109/3DIM.2007.57","DOIUrl":"https://doi.org/10.1109/3DIM.2007.57","url":null,"abstract":"This paper presents a new system for acquiring complete 3D surface models using a single structured light projector, a pair of planar mirrors, and one or more synchronized cameras. We project structured light patterns that illuminate the object from all sides (not just the side of the projector) and are able to observe the object from several vantage points simultaneously. This system requires that projected planes of light be parallel, and so we construct an orthographic projector using a Fresnel lens and a commercial DLP projector. A single Gray code sequence is used to encode a set of vertically-spaced light planes within the scanning volume, and five views of the illuminated object are obtained from a single image of the planar mirrors located behind it. Using each real and virtual camera, we then recover a dense 3D point cloud spanning the entire object surface using traditional structured light algorithms. As we demonstrate, this configuration overcomes a major hurdle to achieving full 360 degree reconstructions using a single structured light sequence by eliminating the need for merging multiple scans or multiplexing several projectors.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116020007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
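The Gray code sequence mentioned in the record above indexes light planes so that adjacent planes differ in exactly one projected bit, which localizes decoding errors at stripe boundaries. A minimal encoder/decoder for such plane indices (the standard binary-reflected Gray code, not code from the paper):

```python
def gray_encode(index: int) -> int:
    # Binary-reflected Gray code: adjacent indices differ in exactly one bit,
    # so a pixel misreading one pattern frame is off by at most one plane.
    return index ^ (index >> 1)

def gray_decode(code: int) -> int:
    # Invert the encoding by folding the bits back down with XOR.
    n = 0
    while code:
        n ^= code
        code >>= 1
    return n
```

In a structured-light setup, a pixel's on/off observations across the pattern frames form the bits of `code`, and `gray_decode` recovers the index of the light plane that illuminated it.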
{"title":"A Cable-driven Parallel Mechanism for Capturing Object Appearance from Multiple Viewpoints","authors":"J. Deschênes, Philippe Lambert, S. Perreault, N. Martel-Brisson, Nathaniel Zoso, A. Zaccarin, P. Hébert, Samuel Bouchard, C. Gosselin","doi":"10.1109/3DIM.2007.4","DOIUrl":"https://doi.org/10.1109/3DIM.2007.4","url":null,"abstract":"This paper presents the full proof of concept of a system for capturing the light field of an object. It is based on a single high resolution camera that is moved all around the object on a cable-driven end-effector. The main advantages of this system are its scalability and low interference with scene lighting. The camera is accurately positioned along hemispheric trajectories by observing target features. From the set of gathered images, the visual hull is extracted and can be used as an approximate geometry for mapping a surface light field. The paper describes the acquisition system as well as the modeling process. The ability of the system to produce models is validated with four different objects whose sizes range from 20 cm to 3 m.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115215785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}