{"title":"Incremental Catmull-Clark subdivision","authors":"Hamid-Reza Pakdel, Faramarz F. Samavati","doi":"10.1109/3DIM.2005.56","DOIUrl":"https://doi.org/10.1109/3DIM.2005.56","url":null,"abstract":"In this paper, a new adaptive method for Catmull-Clark subdivision is introduced. Adaptive subdivision refines specific areas of a model according to user or application needs. Naive adaptive subdivision algorithm changes the connectivity of the mesh, causing geometrical inconsistencies that alter the limit surface. Our method expands the specified region of the mesh such that when it is adaptively subdivided, it produces a smooth surface whose selected area is identical to when the entire mesh is refined. This technique also produces a surface with an increasing level of detail from coarse to fine areas of the surface. We compare our adaptive subdivision with other schemes and present some example applications.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128563707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaussian scale-space dense disparity estimation with anisotropic disparity-field diffusion","authors":"Jangheon Kim, T. Sikora","doi":"10.1109/3DIM.2005.50","DOIUrl":"https://doi.org/10.1109/3DIM.2005.50","url":null,"abstract":"We present a new reliable dense disparity estimation algorithm which employs Gaussian scale-space with anisotropic disparity-field diffusion. This algorithm estimates edge-preserving dense disparity vectors using a diffusive method on iteratively Gaussian-filtered images with a scale, i.e. the Gaussian scale-space. While a Gaussian filter kernel generates a coarser resolution from stereo image pairs, only strong and meaningful boundaries are adoptively selected on the resolution of the filtered images. Then, coarse global disparity vectors are initialized using the boundary constraint. The per-pixel disparity vectors are iteratively obtained by the local adjustment of the global disparity vectors using an energy-minimization framework. The proposed algorithm preserves the boundaries while inner regions are smoothed using anisotropic disparity-field diffusion. In this work, the Gaussian scale-space efficiently avoids illegal matching on a large baseline by the restriction of the range. Moreover, it prevents the computation from iterating into local minima of ill-posed diffusion on large gradient areas e.g. shadow and texture region, etc. The experimental results prove the excellent localization performance preserving the disparity discontinuity of each object.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124711019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatio-temporal fusion of multiple view video rate 3D surfaces","authors":"G. Collins, A. Hilton","doi":"10.1109/3DIM.2005.75","DOIUrl":"https://doi.org/10.1109/3DIM.2005.75","url":null,"abstract":"We consider the problem of geometric integration and representation of multiple views of non-rigidly deforming 3D surface geometry captured at video rate. Instead of treating each frame as a separate mesh we present a representation which takes into consideration temporal and spatial coherence in the data where possible. We first segment gross base transformations using correspondence based on a closest point metric and represent these motions as piecewise rigid transformations. The remaining residual is encoded as displacement maps at each frame giving a displacement video. At both these stages occlusions and missing data are interpolated to give a representation which is continuous in space and time. We demonstrate the integration of multiple views for four different non-rigidly deforming scenes: hand, face, cloth and a composite scene. The approach achieves the integration of multiple-view data at different times into one representation which can processed and edited.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"26 11-12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120892523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic registration of range images based on correspondence of complete plane patches","authors":"Wenfeng He, Wei Ma, H. Zha","doi":"10.1109/3DIM.2005.23","DOIUrl":"https://doi.org/10.1109/3DIM.2005.23","url":null,"abstract":"One of the difficulties in registering two range images scanned by 3D laser scanners is how to get a correct correspondence over the two images automatically. In this paper, we propose an automatic registration method based on matching of extracted planes. First, we introduce a new class of features: complete plane patches (CPP) on the basis of analysis of properties of real scenes. Then we generate a compact interpretation tree for these features. Finally, the image registration is accomplished automatically by searching the interpretation tree.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"21 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120925010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Globally convergent range image registration by graph kernel algorithm","authors":"R. Sára, I. Okatani, A. Sugimoto","doi":"10.1109/3DIM.2005.51","DOIUrl":"https://doi.org/10.1109/3DIM.2005.51","url":null,"abstract":"Automatic range image registration without any knowledge of the viewpoint requires identification of common regions across different range images and then establishing point correspondences in these regions. We formulate this as a graph-based optimization problem. More specifically, we define a graph in which each vertex represents a putative match of two points, each edge represents binary consistency decision between two matches, and each edge orientation represents match quality from worse to better putative match. Then strict sub-kernel defined in the graph is maximized. The maximum strict sub-kernel algorithm enables us to uniquely determine the largest consistent matching of points. To evaluate the quality of a single match, we employ the histogram of triple products that are generated by all surface normals in a point neighborhood. Our experimental results show the effectiveness of our method for rough range image registration.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117225085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D registration by textured spin-images","authors":"N. Brusco, M. Andreetto, A. Giorgi, G. Cortelazzo","doi":"10.1109/3DIM.2005.5","DOIUrl":"https://doi.org/10.1109/3DIM.2005.5","url":null,"abstract":"This work is motivated by the desire of exploiting for 3D registration purposes the photometric information current range cameras typically associate to range data. Automatic pairwise 3D registration procedures are two steps procedures with the first step performing an automatic crude estimate of the rigid motion parameters and the second step refining them by the ICP algorithm or some of its variations. Methods for efficiently implementing the first crude automatic estimate are still an open research area. Spin-images are a 3D matching technique very effective in this task. Since spin-images solely exploit geometry information it appears natural to extend their original definition to include texture information. Such an operation can clearly be made in many ways. This work introduces one particular extension of spin-images, called textured spin-images, and demonstrates its performance for 3D registration. It will be seen that textured spin-images enjoy remarkable properties since they can give rigid motion estimates more robust, more precise, more resilient to noise than standard spin-images at a lower computational cost.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122980655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-gradient-guided real-time stereo on graphics hardware","authors":"Minglun Gong, Ruigang Yang","doi":"10.1109/3DIM.2005.55","DOIUrl":"https://doi.org/10.1109/3DIM.2005.55","url":null,"abstract":"We present a real-time correlation-based stereo algorithm with improved accuracy. Encouraged by the success of recent stereo algorithms that aggregate the matching cost based on color segmentation, a novel image-gradient-guided cost aggregation scheme is presented in this paper. The new scheme is designed to fit the architecture of recent graphics processing units (GPUs). As a result, our stereo algorithm can run completely on the graphics board: from rectification, matching cost computation, cost aggregation, to the final disparity selection. Compared with many real-time stereo algorithms that use fixed windows, noticeable accuracy improvement has been obtained without sacrificing realtime performance. In addition, existing global optimization algorithms can also benefit from the new cost aggregation scheme. The effectiveness of our approach is demonstrated with several widely used stereo datasets and live data captured from a stereo camera.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125827350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian modelling of camera calibration and reconstruction","authors":"R. Sundareswara, P. Schrater","doi":"10.1109/3DIM.2005.24","DOIUrl":"https://doi.org/10.1109/3DIM.2005.24","url":null,"abstract":"Camera calibration methods, whether implicit or explicit, are a critical part of most 3D vision systems. These methods involve estimation of a model for the camera that produced the visual input, and subsequently to infer the 3D structure that gave rise to the input. However, in these systems the error in calibration is typically unknown, or if known, the effect of calibration error on subsequent processing (e.g. 3D reconstruction) is not accounted for. In this paper, we propose a Bayesian camera calibration method that explicitly computes calibration error, and we show how knowledge of this error can be used to improve the accuracy of subsequent processing. What distinguishes the work is the explicit computation of a posterior distribution on unknown camera parameters, rather than just a best estimate. Marginalizing (averaging) subsequent estimates by this posterior is shown to reduce reconstruction error over calibration approaches that rely on a single best estimate. The method is made practical using sampling techniques, that require only the evaluation of the calibration error function and the specification of priors. Samples with their corresponding probability weights can be used to produce better estimates of the camera parameters. Moreover, these samples can be directly used to improve estimates that rely on calibration information, like 3D reconstruction. We evaluate our method using simulated data for a structure from motion problem, in which the same point matches are used to calibrate the camera, estimate the motion, and reconstruct the 3D geometry. Our results show improved reconstruction over non-linear Camera calibration methods like the Maximum Likelihood estimate. Additionally, this approach scales much better in the face of increasingly noisy point matches.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133119428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fitting of 3D circles and ellipses using a parameter decomposition approach","authors":"Xiaoyi Jiang, D. Cheng","doi":"10.1109/3DIM.2005.46","DOIUrl":"https://doi.org/10.1109/3DIM.2005.46","url":null,"abstract":"Many optimization processes encounter a problem in efficiently reaching a global minimum or a near global minimum. Traditional methods such as Levenberg-Marquardt algorithm and trust-region method face the problems of dropping into local minima as well. On the other hand, some algorithms such as simulated annealing and genetic algorithm try to find a global minimum but they are mostly time-consuming. Without a good initialization, many optimization methods are unable to guarantee a global minimum result. We address a novel method in 3D circle and ellipse fitting, which alleviates the optimization problem. It can not only increase the probability of getting in global minima but also reduce the computation time. Based on our previous work, we decompose the parameters into two parts: one part of parameters can be solved by an analytic or a direct method and another part has to be solved by an iterative procedure. Via this scheme, the topography of optimization space is simplified and therefore, we reduce the number of local minima and the computation time. We experimentally compare our method with the traditional ones and show superior performance.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"519 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116260866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Registration of multiple range scans as a location recognition problem: hypothesis generation, refinement and verification","authors":"B. J. King, Tomasz Malisiewicz, C. Stewart, R. Radke","doi":"10.1109/3DIM.2005.68","DOIUrl":"https://doi.org/10.1109/3DIM.2005.68","url":null,"abstract":"This paper addresses the following version of the multiple range scan registration problem. A scanner with an associated intensity camera is placed at a series of locations throughout a large environment; scans are acquired at each location. The problem is to decide automatically which scans overlap and to estimate the parameters of the transformations aligning these scans. Our technique is based on (1) detecting and matching keypoints - distinctive locations in range and intensity images, (2) generating and refining a transformation estimate from each keypoint match, and (3) deciding if a given refined estimate is correct. While these steps are familiar, we present novel approaches to each. A new range keypoint technique is presented that uses spin images to describe holes in smooth surfaces. Intensity keypoints are detected using multiscale filters, described using intensity gradient histograms, and backprojected to form 3D keypoints. A hypothesized transformation is generated by matching a single keypoint from one scan to a single keypoint from another, and is refined using a robust form of the ICP algorithm in combination with controlled region growing. Deciding whether a refined transformation is correct is based on three criteria: alignment accuracy, visibility, and a novel randomness measure. Together these three steps produce good results in test scans of the Rensselaer campus.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124115928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}