{"title":"A Comparative Analysis of Depth-Discontinuity and Mixed-Pixel Detection Algorithms","authors":"P. Tang, Daniel F. Huber, B. Akinci","doi":"10.1109/3DIM.2007.5","DOIUrl":"https://doi.org/10.1109/3DIM.2007.5","url":null,"abstract":"Laser scanner measurements are corrupted by noise and artifacts that can undermine the performance of registration, segmentation, surface reconstruction, recognition, and other algorithms operating on the data. While much research has addressed laser scanner noise models, comparatively little is known about other artifacts, such as the mixed pixel effect, color-dependent range biases, and specular reflection effects. This paper focuses on the mixed pixel effect and the related challenge of detecting depth discontinuities in 3D data. While a number of algorithms have been proposed for detecting mixed pixels and depth discontinuities, there is no consensus on how well such algorithms perform or which algorithm performs best. This paper presents a comparative analysis of five mixed-pixel/discontinuity detection algorithms on real data sets. We find that an algorithm based on the surface normal angle has the best overall performance, but that no algorithm performs exceptionally well. Factors influencing algorithm performance are also discussed.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114517900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Challenges and a Non-In-Depth Overview of Recent Progress","authors":"L. Gool, B. Leibe, Pascal Müller, M. Vergauwen, T. Weise","doi":"10.1109/3DIM.2007.1","DOIUrl":"https://doi.org/10.1109/3DIM.2007.1","url":null,"abstract":"Although a lot of effort already went into the development of 3D acquisition technology, and existing methods come of age, several challenges remain. We try to give a - probably incomplete - overview of these. Then, some of our recent work at ETH Zurich and the University of Leuven is discussed, where we try to tackle such outstanding issues.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"51 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114144472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cached k-d tree search for ICP algorithms","authors":"A. Nüchter, K. Lingemann, J. Hertzberg","doi":"10.1109/3DIM.2007.15","DOIUrl":"https://doi.org/10.1109/3DIM.2007.15","url":null,"abstract":"The ICP (iterative closest point) algorithm is the de facto standard for geometric alignment of three-dimensional models when an initial relative pose estimate is available. The basis of ICP is the search for closest points. Since the development of ICP, k-d trees have been used to accelerate the search. This paper presents a novel search procedure, namely cached k-d trees, exploiting iterative behavior of the ICP algorithm. It results in a significant speedup of about 50% as we show in an evaluation using different data sets.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121912162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examplar-based Shape from Shading","authors":"Xinyu Huang, Jizhou Gao, Liang Wang, Ruigang Yang","doi":"10.1109/3DIM.2007.24","DOIUrl":"https://doi.org/10.1109/3DIM.2007.24","url":null,"abstract":"Traditional Shape-from-Shading (SFS) techniques aim to solve an under-constrained problem: estimating depth map from one single image. The results are usually brittle from real images containing detailed shapes. Inspired by recent advances in texture synthesis, we present an exemplar-based approach to improve the robustness and accuracy of SFS. In essence, we utilize an appearance database synthesized from known 3D models where each image pixel is associated with its ground-truth normal. The input image is compared against the images in the database to find the most likely normals. The prior knowledge from the database is formulated as an additional cost term under an energy minimization framework to solve the depth map. Using a generic small database consisting of 50 spheres with different radius, our approach has demonstrated its capability to obviously improve the reconstruction quality from both synthetic and real images with different shapes, in particular those with small details.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130422803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Sensor for Simultaneously Capturing Texture and Shape by Projecting Structured Infrared Light","authors":"Kiyotaka Akasaka, R. Sagawa, Y. Yagi","doi":"10.1109/3DIM.2007.7","DOIUrl":"https://doi.org/10.1109/3DIM.2007.7","url":null,"abstract":"Simultaneous capture of the texture and shape of a moving object in real time is expected to be applicable to various fields including virtual reality and object recognition. Two difficulties must be overcome to develop a sensor able to achieve this feature: fast capturing of shape and the simultaneous capture of texture and shape. One-shot capturing methods based on projecting colored structured lights have already been proposed to obtain shape at a high frame rate. However, since these methods used visible lights, it is impossible to capture texture and shape simultaneously. In this paper, we propose a method that uses projected infrared structured light. Since the proposed method uses visible light for texture and infrared light for shape, simultaneous capturing can be achieved. In addition, a system was developed that maps texture on to the captured shape without occlusion by placing the cameras for visible and infrared lights coaxially.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124960206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Reverse Engineering of Free Form Objects Using Morse Theory","authors":"J. Branch, F. Prieto, P. Boulanger","doi":"10.1109/3DIM.2007.12","DOIUrl":"https://doi.org/10.1109/3DIM.2007.12","url":null,"abstract":"In this paper, a method for surface reconstruction by means of optimized NURBS (non-uniform rational b-splines) patches from complex quadrilateral bases on triangulated surfaces of arbitrary topology is proposed. To decompose the triangulated surface into quadrilateral patches, Morse theory and spectral mesh analysis are used. The quadrilateral regions obtained from this analysis is then regularized by computing the geodesic curves between each corner of the quadrilateral regions. These geodesies are then fitted by a B-splines curves creating a quadrilateral network on which a NURBS surface is fitted. The NURBS surfaces are then optimized using evolutive strategies to guaranty the best fit as well as C1 continuity between the patches.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124976863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-organizing Deformable Model for Mapping 3D Object Model onto Arbitrary Target Surface","authors":"K. Morooka, Shun Matsui, H. Nagahashi","doi":"10.1109/3DIM.2007.46","DOIUrl":"https://doi.org/10.1109/3DIM.2007.46","url":null,"abstract":"This paper presents a new technique for projecting a 3D object mesh model onto a surface of another target object. The mesh model adapts its shape to the target surface, and is called Self-organizing Deformable Model(SDM). The SDM algorithm works by combining a competitive learning and an energy minimization. The framework of the SDM makes it possible to map a mesh model onto various kinds of target surfaces. This characteristic can not be seen in other methods for surface parameterization, and it enables us to apply the SDM to some different fields in computer vision and computer graphics. Also the SDM can reconstruct shapes of target objects similar to general deformable models.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131159779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time full-field 3-D surface-shape measurement using off-the-shelf components and a single processor","authors":"Peirong Jia, J. Kofman, C. English","doi":"10.1109/3DIM.2007.43","DOIUrl":"https://doi.org/10.1109/3DIM.2007.43","url":null,"abstract":"Phase-shifting fringe-projection methods have been increasingly used for three-dimensional (3D) object surface modeling to permit full-field measurement. This paper presents a real-time full-field high-resolution 3D surface-shape measurement system implemented with an efficient 3D shape measurement pipeline and triangular-pattern phase-shifting based on off-the-shelf components, software synchronization and a single computer-processor. The system projects computer-generated fringe patterns with a triangular intensity profile onto an object via a Digital Light Processing (DLP) projector. The projected patterns are electronically shifted and a CCD camera synchronized with the DLP projector by software captures the images from another direction. The captured images are processed by a single computer to reconstruct the 3D shape using triangular-pattern phase-shifting algorithms, and the model is displayed in real time. The 3D shape acquisition system achieved a speed of 5.6 fps for an image size of 648times494 pixels using the two-step triangular-pattern phase-shifting method, without any hardware synchronization or dual processing.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124980564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense 3D Reconstruction method using Coplanarities and Metric Constraints for Line Laser Scanning","authors":"Hiroshi Kawasaki, Furukawa Ryo","doi":"10.1109/3DIM.2007.19","DOIUrl":"https://doi.org/10.1109/3DIM.2007.19","url":null,"abstract":"In this paper, we propose a novel method to achieve both dense 3D reconstruction of the scene and estimation of the camera intrinsic parameters by using coplanarities and other constraints (e.g. orthogonalities or parallelisms) derived from relations between planes in the scene and reflected curves of line lasers captured by a single camera. In our study, we categorize coplanarities in the scene into two types: implicit coplanarities, which can be observed as reflected curves of line lasers, and explicit coplanarities, which are, for example, observed as walls of a building. By using both types of coplanarities, we can construct simultaneous equations and can solve them up to four degrees of freedom. To upgrade the solution to the Euclidean space and estimate the camera intrinsic parameters, we can use metric constraints such as orthogonalities of the planes. Such metric constraints are given by, for example, observing the corners of rectangular boxes in the scene, or using special laser projecting device composed of two line lasers whose laser planes are configured to be perpendicular.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114654245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-frame frequency-based estimation of local motion parallax direction in 3D cluttered scenes","authors":"V. Chapdelaine-Couture, M. Langer, A. Caine, R. Mann","doi":"10.1109/3DIM.2007.60","DOIUrl":"https://doi.org/10.1109/3DIM.2007.60","url":null,"abstract":"When an observer moves in a 3D static scene, the resulting motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. It is well-known that the vector difference - or motion parallax - between nearby image motion field vectors points toward the direction of heading and so computing this vector difference can help in estimating the heading direction. For 3D cluttered scenes that contain many objects at many different depths, it can be difficult to compute local image motion vectors because these scenes have many depth discontinuities which corrupt local motion estimates and thus it is unclear how to estimate local motion parallax. Recently a frequency domain method was proposed to address this problem which uses the space-time power spectrum of a sequence of images. The method requires a large number of frames, however, and assumes the observer's motion is constant within these frames. Here we present a frequency-based method which uses two frames only and hence does not suffer from the limitations of the previously proposed method. We demonstrate the effectiveness of the new method using both synthetic and natural images.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121029717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}