{"title":"Combining off- and on-line calibration of a digital camera","authors":"M. Urbanek, R. Horaud, P. Sturm","doi":"10.1109/IM.2001.924413","DOIUrl":"https://doi.org/10.1109/IM.2001.924413","url":null,"abstract":"We introduce a novel outlook on the self-calibration task, by considering images taken by a camera in motion, allowing for zooming and focusing. Apart from the complex relationship between the lens control settings and the intrinsic camera parameters, a prior off-line calibration allows to neglect the setting of focus, and to fix the principal point and aspect ratio throughout distinct views. Thus, the calibration matrix is dependent only on the zoom position. Given a fully calibrated reference view, one has only one parameter to estimate for any other view of the same scene, in order to calibrate it and to be able to perform metric reconstructions. We provide a close-form solution, and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is a reduced -to one-number of critical camera configurations, associated with it. Moreover we propose a method for computing the epipolar geometry of two views, taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view, that is \"easy\" to match with the other two.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114453747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D reconstruction from two orthogonal views using simulated annealing approach","authors":"J. Ning, S. McClean, K. Cranley","doi":"10.1109/IM.2001.924465","DOIUrl":"https://doi.org/10.1109/IM.2001.924465","url":null,"abstract":"The technique of 3-dimensional (3D) reconstruction from two orthogonal images is mainly used in ventricle or vessel reconstruction. In the literature, the 3D object is considered to be a stacked 2-dimensional (2D) slice set and 3D reconstruction can be achieved by constructing a stack of 2D slice reconstructions, by rendering each slice into two 1-dimensional profiles corresponding to a pair of rows obtained from the segmented projections. Previous work has modelled each slice as a 2D Markov-Gibbs random field and simulated annealing approach is used to solve the 2D reconstruction problem based on two infinite-source orthogonal views. The current work applies this approach to 3D space in particular. We find this approach can be applied to more wide application, rather than only in cardiac reconstruction. The results obtained by reconstructing binary and multiple objects of the human voxel phantom from two orthogonal projections, yield low reconstruction errors and the shapes of the reconstructed objects are observed to match the original objects to a high degree.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130735151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D capture for computer graphics","authors":"H. Rushmeier","doi":"10.1109/IM.2001.924481","DOIUrl":"https://doi.org/10.1109/IM.2001.924481","url":null,"abstract":"We examine the use of 3D scanned objects in computer graphics applications. We consider the requirements for the types and resolution of data required. We also identify some outstanding research issues in this area.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114327582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tolerance control with high resolution 3D measurements","authors":"F. Prieto, T. Redarce, P. Boulanger, R. Lepage","doi":"10.1109/IM.2001.924473","DOIUrl":"https://doi.org/10.1109/IM.2001.924473","url":null,"abstract":"The use of a laser range sensor in the 3D digitalization process for inspection tasks allows very significant improvement in acquisition speed and in 3D points density but does not attain the accuracy obtained with a Coordinate Measuring Machine (CMM). Inspection consists in verifying the accuracy of a part related to a given set of tolerances. It is thus necessary that the 3D measurements be more accurate than the tolerance range. In the 3D capture of a part, several sources of error can alter the measured values. So, we have to find and model the effect of the most influential parameters affecting the accuracy of the range sensor in the digitalization process. This model is used to provide a sensing plan to acquire completely and accurately the geometry of an object. The sensing plan is composed of the set of viewpoints, each of which defines the exact position and orientation of the camera relative to the part. The 3D cloud obtained from the sensing plan is registered with the CAD model of the part and then segmented according to the different surfaces. Segmentation results are used to check tolerances of the part. We propose in this paper a methodology for geometrical inspection that uses the segmented 3D data related to the surface of interest and the CAD model of the part.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125898394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-dimensional shape modeling with extended hyperquadrics","authors":"Tsuneo Saito, M. Ohuchi","doi":"10.1109/IM.2001.924449","DOIUrl":"https://doi.org/10.1109/IM.2001.924449","url":null,"abstract":"The shape representation and modeling based on implicit functions have received considerable attention in computer vision literature. In this paper, we propose extended hyperquadrics, as a generalization of hyperquadrics developed by Hanson, for modeling global geometric shapes. The extended hyperquadrics can strengthen the representation power of hyperquadrics, especially for the object with concavities. We discuss the distance measures between extended hyperquadric surfaces and given data set and their minimization to obtain the optimum model parameters. We present several experimental results for fitting extended hyperquadrics to 3D real and synthetic data. We demonstrate that extended hyperquadrics can model more complex shapes than hyperquadrics, maintaining many desirable properties of hyperquadrics.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127562820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic 3D modeling using range images obtained from unknown viewpoints","authors":"Daniel F. Huber","doi":"10.1109/IM.2001.924424","DOIUrl":"https://doi.org/10.1109/IM.2001.924424","url":null,"abstract":"In this paper, we present a method for automatically creating a 3D model of a scene from a set of range images obtained from unknown viewpoints. Existing 3D modeling approaches require manual interaction or rely on mechanical methods to estimate the viewpoints. Given a set of range images (views), we use a surface matching system to exhaustively register all pairs of views. The results are verified for consistency, but some incorrect matches may be locally undetectable and correct matches may be missed. We then construct a consistent model from these potentially faulty matches using a global consistency criterion to eliminate incorrect, but locally consistent, matches. The procedure is demonstrated through an application called hand-held modeling, in which a 3D model is automatically created by scanning an object held in a person's hand.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128119417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliable 3D surface acquisition, registration and validation using statistical error models","authors":"J. Guehring","doi":"10.1109/IM.2001.924440","DOIUrl":"https://doi.org/10.1109/IM.2001.924440","url":null,"abstract":"We present a complete data acquisition and processing chain for the reliable inspection of industrial parts considering anisotropic noise. Data acquisition is performed with a stripe projection system that was modeled and calibrated using photogrammetric techniques. Covariance matrices are attached individually to points during 3D coordinate computation. Different datasets are registered using a new multi-view registration technique. In the validation step, the registered datasets are compared with the CAD model to verify that the measured part meets its specification. While previous methods have only considered the geometrical discrepancies between the sensed part and its CAD model, we also consider statistical information to decide whether the differences are significant.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134078591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affine transformations of 3D objects represented with neural networks","authors":"Emmanouil Piperakis, I. Kumazawa","doi":"10.1109/IM.2001.924438","DOIUrl":"https://doi.org/10.1109/IM.2001.924438","url":null,"abstract":"An experiment is conducted to prove that multilayer feed-forward neural networks are capable of representing most classes of 3D objects, used in computer graphics. Furthermore, simple affine transformations were applied on those objects showing that modeling is possible using this type of representation. One network is used per one volumetric description of a 3D object. The neural network employed, is a function that takes as inputs 3D coordinates in object space and produces as output a value that indicates if the point belongs to the object or not. The representation method is tested by repeated evaluations of the network for points inside the object space. Objects that have a simple analytical form, e.g. a sphere or a cube, are represented by specifying the networks' parameters manually. For objects with more complicated shapes we generate training examples. These training examples consist of points on the objects' surface and points lying on inclosed and enclosing surfaces. The algorithm for generating the training data, is a simple heuristic that uses the surface normal to determine whether a point in the vicinity of the surface belongs to the inside or the outside of the object. The network is finally trained on the generated examples, using the back propagation technique. The experimental results prove that this representation method is accurate and compact. Feedforward neural networks being hardware implementable offer the ability for a faster representation. This paper is the second step, on a series of ideas, towards creating a real time 3D renderer based entirely on neural networks.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115885956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust recognition and pose determination of 3-D objects using range images in eigenspace approach","authors":"D. Skočaj, A. Leonardis","doi":"10.1109/IM.2001.924428","DOIUrl":"https://doi.org/10.1109/IM.2001.924428","url":null,"abstract":"In this paper we propose a robust method for recognition and pose determination of 3-D objects using range images in the eigenspace approach. Instead of computing the coefficients by a projection of the data onto the eigenimages, we determine the coefficients by solving a set of linear equations in a robust manner. The method efficiently overcomes the problem of missing pixels, noise and occlusions in range images. The results show that the proposed method outperforms the standard one in recognition and pose determination.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116457842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of elastic constants from 3D range-flow","authors":"J. Lang, D. Pai","doi":"10.1109/IM.2001.924471","DOIUrl":"https://doi.org/10.1109/IM.2001.924471","url":null,"abstract":"This paper shows how range-flow can help to estimate the elastic constants of complete objects. In our framework, the object is deformed actively by a robotic device pushing into the object. The robot senses the contact force and surface displacement at the point of contact. This contact object behavior alone is not sufficient to estimate elastic constants. The displacement of the object's non-contacted surface needs to be taken into account. This paper presents a method to estimate surface displacement from range-flow calculated with trinocular stereo imagery. The observed object behavior allows us to estimate the linear elastic material constants of the object. To this end we will introduce the boundary element method as a modeling tool for deformable objects in 3D imaging. The boundary element method is a full discrete continuum mechanics model.","PeriodicalId":155451,"journal":{"name":"Proceedings Third International Conference on 3-D Digital Imaging and Modeling","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126827331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}