{"title":"Scaling up multiprojector immersive displays: the LightTwist project","authors":"S. Roy","doi":"10.1109/3DIM.2007.45","DOIUrl":"https://doi.org/10.1109/3DIM.2007.45","url":null,"abstract":"Summary form only given. As video projectors are becoming less expensive, we see more and more use of multiple projectors to create a single coherent image, the so-called multi- projectors displays. They provide higher resolution, larger screen size as well as more flexible screen shape. However, they must be carefully aligned and calibrated and the media, video as well as 3D, becomes harder to manage and display. By using a camera, it is possible to automatically align geometrically and photometrically the projectors using structured light. The LightTwist project aims at providing this alignment as well as media spatialization and synchronization and projection, in a real context, i.-e. outside the controlled environment of the research lab. In an attempt to truly free the artists from the limitations inherent to single projectors, this project works with common hardware and imposes very few constraints on the projector-screen geometry while simplifying the use of a large number of projectors. For cylindrical or spherical immersion the number of projectors tends to increase, thereby increasing the ratio of projected pixels to cameras pixels. This makes the accurate inversion of the camera-projector mapping a real challenge, especially in regions of overlap between the projectors. This \"pixel ratio problem\", as well as other issues such as camera calibration, real-time performance, and synchronization with sound, have been tested in this project and will be presented as well as examples of real installations.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123357527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Range Image Segmentation for Modeling and Object Detection in Urban Scenes","authors":"C. Chen, I. Stamos","doi":"10.1109/3DIM.2007.42","DOIUrl":"https://doi.org/10.1109/3DIM.2007.42","url":null,"abstract":"We present fast and accurate segmentation algorithms of range images of urban scenes. The utilization of these algorithms is essential as a pre-processing step for a variety of tasks, that include 3D modeling, registration, or object recognition. The accuracy of the segmentation module is critical for the performance of these higher-level tasks. In this paper, we present a novel algorithm for extracting planar, smooth non-planar, and non-smooth connected segments. In addition to segmenting each individual range image, our methods also merge registered segmented images. That results in coherent segments that correspond to urban objects (such as facades, windows, ceilings, etc.) of a complete large scale urban scene. We present results from experiments of one exterior scene (Cooper Union building, NYC) and one interior scene (Grand Central Station, NYC).","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124924459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Construction of Feature Landmark Database Using Omnidirectional Videos and GPS Positions","authors":"Sei Ikeda, Tomokazu Sato, K. Yamaguchi, N. Yokoya","doi":"10.1109/3DIM.2007.16","DOIUrl":"https://doi.org/10.1109/3DIM.2007.16","url":null,"abstract":"This paper describes a method for constructing feature landmark database using omnidirectional videos and GPS positions acquired in outdoor environments. The feature landmark database is used to estimate camera positions and postures for various applications such as augmented reality systems and self-localization of robots and automobiles. We have already proposed a camera position and posture estimation method using landmark database that stores 3D positions of sparse feature points with their view-dependent image templates. For large environments, the cost for construction of landmark database is high because conventional 3-D reconstruction methods requires measuring some absolute positions of feature points manually to suppress accumulative estimation errors in structure-from-motion process. To achieve automatic construction of landmark database for large outdoor environments, we newly propose a method that constructs database without manual specification of features using omnidirectional videos and GPS positions.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124642168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-View Edge-based Stereo by Incorporating Spatial Coherence","authors":"Gang Li, Yakup Genç, S. Zucker","doi":"10.1109/3DIM.2007.35","DOIUrl":"https://doi.org/10.1109/3DIM.2007.35","url":null,"abstract":"A limitation of the state-of-art multi-view reconstruction algorithms is in their ability to handle scenes with very little texture or a lot of clutter. Texture-less scenes with clutter are traditionally difficult for dense multi-view stereo methods. Feature-based stereo algorithms, in particular, edge- based methods, are better suited for these scenes. The success of the edge-based reconstruction heavily depends on the amount of the prior information used. This paper introduces an algorithm that extends the well-known plane- sweep algorithm by incorporating spatial coherence of the scene. The method utilizes the geometric consistency derived from the continuity of three-dimensional edge points as a geometric constraint in addition to previously known multi-view image constraints. Experimental results on synthetic as well as real data are provided to demonstrate the efficacy and robustness of the proposed method.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132781975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study of Shape Similarity for Temporal Surface Sequences of People","authors":"Peng Huang, J. Starck, A. Hilton","doi":"10.1109/3DIM.2007.8","DOIUrl":"https://doi.org/10.1109/3DIM.2007.8","url":null,"abstract":"The problem of 3D shape matching is typically restricted to static objects to classify similarity for shape retrieval. In this paper we consider 3D shape matching in temporal sequences where the goal is instead to find similar shapes for a single time-varying object, here the human body. Local- feature distribution descriptors are adopted to provide a rich object description that is invariant to changes in surface topology. Two contributions are made, (i) a comparison of descriptors for shape similarity in temporal sequences of a dynamic free-form object and (ii) a quantitative evaluation based on the Receiver-Operator Characteristic (ROC) curve for the descriptors using a ground-truth data set for synthetic motion sequences. Shape Distribution [25], Spin Image [15], Shape Histogram [1] and Spherical Harmonic [17] descriptors are compared. The highest performance is obtained by volume-sampling shape-histogram descriptors. The descriptors also demonstrate relative in- sensitivity to parameter setting. The application is demonstrated in captured sequences of 3D human surface motion.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127381569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D laser measurement system for large scale architectures using multiple mobile robots","authors":"R. Kurazume, Yukihiro Tobata, Y. Iwashita, T. Hasegawa","doi":"10.1109/3DIM.2007.2","DOIUrl":"https://doi.org/10.1109/3DIM.2007.2","url":null,"abstract":"In order to construct three dimensional shape models of large scale architectures by a laser range finder, a number of range images are normally taken from various viewpoints and these images are aligned using post-processing procedure such as ICP algorithm. However in general, before applying ICP algorithm, these range images have to be registered to correct positions roughly by a human operator in order to converge to precise positions. In addition, range images must be overlapped sufficiently each other by taking dense images from close viewpoints. On the other hand, if poses of the laser range finder at viewpoints can be identified precisely, local range images can be converted to the world coordinate system directly with simple transformation calculation. This paper proposes a new measurement system for large scale architectures using a group of multiple robots and an on-board laser range finder. Each measurement position is identified by the highly precise positioning technique named cooperative positioning system or CPS which utilizes the characteristics of multiple robots system. The proposed system can construct 3D shapes of large scale architectures without any post-processing procedure such as ICP algorithm and dense range measurements. The measurement experiments in unknown and large indoor/outdoor environments are successfully carried out using the newly developed measurement system consisting of three mobile robots named CPS-V.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122601825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape-merging and interpolation using class estimation for unseen voxels with a GPU-based efficient implementation","authors":"Furukawa Ryo, Tomoya Itano, Akihiko Morisaka, Hiroshi Kawasaki","doi":"10.1109/3DIM.2007.47","DOIUrl":"https://doi.org/10.1109/3DIM.2007.47","url":null,"abstract":"The merging of multiple range images obtained by 3D measurement systems for generating a single polygon mesh, and processing for filling holes caused by unmeasured data or insufficient range images are essential processes for CAD, digital archiving of shapes, and CG rendering. Many of the existing processes that have been proposed for merging and interpolating multiple shapes do not function well when the number of range images is small. In this paper, the space carving method is improved, and an interpolation algorithm is proposed which is capable of producing stable results even when the number of range images is small. In the proposed method, not only the observed voxels in a signed distance field, but also unseen voxels are determined as either inside or outside of an object using Bayes estimation. Characteristics of the proposed method include that closed surfaces are always obtained, and a GPU-based, efficient implementation is possible. In addition, in the case that the range image is obtained using an active stereo method, high precision estimation results can be achieved by using information regarding the light sources.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127157156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stratified Self-calibration and Metric Reconstruction of a Trinocular Structured Light Vision System","authors":"Y. Li, Suk-Yun Lee","doi":"10.1109/3DIM.2007.53","DOIUrl":"https://doi.org/10.1109/3DIM.2007.53","url":null,"abstract":"A trinocular structured light vision system consists of a projector and two cameras. The using of two cameras instead of only one camera is to handle erroneous matching caused by reflection and blurring. A trinocular active vision system is useful in 3D reconstruction for its robustness and accuracy. One of the most difficult tasks in using such a system is the self-calibration of the system. In this paper a self-calibration and metric reconstruction algorithm for a trinocular structured light vision system is proposed. It is shown that the vanishing lines of the stripe planes in the camera view can be identified from the homographys induced by the planes and the geometry of the projective pattern. With the knowledge of the plane at infinity, the intrinsic and extrinsic parameters of both the cameras and the projector can be identified. Experimental results with real images are presented, demonstrating the accuracy and robustness of the proposed algorithm.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123262754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Light stripe triangulation for multiple of moving rigid objects","authors":"Takuya Funatomi, M. Iiyama, K. Kakusho, M. Minoh","doi":"10.1109/3DIM.2007.32","DOIUrl":"https://doi.org/10.1109/3DIM.2007.32","url":null,"abstract":"In this paper, we propose an extension of light stripe triangulation for multiple of moving rigid objects. With traditional light stripe triangulation, the acquired shape of moving object would be distorted. If the subject is a rigid object, we can correct the distortion in the acquired shape based on its motion. However, when the subject consists of multiple rigid objects that move differently one another, we need to segment the acquired shape into each object before correcting the distortion. When objects move largely, the segmentation becomes difficult without distortion correction since objects' motion would significantly distort acquired shape. We propose a method for acquiring distortion-free shape of each object by performing segmentation after correcting distortion. When we correct the distortion based on the motion of one of the objects before segmentation, corrected shape would include not only true shape of the object but also false shape of the other objects. We perform segmentation as detecting such false shape by using silhouettes of the whole subject at some moments. We can acquire true shape of each object by eliminating detected false shape from the corrected shape. We validate proposed method and evaluate the accuracy by experiment. We also demonstrate an articulated hand model which acquired by applying proposed method for measuring the real hand under assuming that the hand consists of 18 rigid objects.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126638548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling and Calibration of Coupled Fish-Eye CCD Camera and Laser Range Scanner for Outdoor Environment Reconstruction","authors":"X. Brun, F. Goulette","doi":"10.1109/3DIM.2007.34","DOIUrl":"https://doi.org/10.1109/3DIM.2007.34","url":null,"abstract":"Precise and realistic models of outdoor environments such as cities and roads are useful for various applications. In order to do so, geometry and photography of environments must be captured. We present in this paper a coupled system, based on a fish-eye lens CCD camera and a laser range scanner, aimed at capturing color and geometry in this context. To use this system, a revelant model and a accurate calibration method are presented. The calibration method uses a simplified fish-eye model; the method uses only one image for fish-eye parameters, and avoids the use of large calibration pattern as required in others methods. The validity and precision of the method are assessed and example of colored 3D points produced by the system is presented.","PeriodicalId":442311,"journal":{"name":"Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125795624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}