{"title":"Wide baseline stereo matching","authors":"P. Pritchett, Andrew Zisserman","doi":"10.1109/ICCV.1998.710802","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710802","url":null,"abstract":"The objective of this work is to enlarge the class of camera motions for which epipolar geometry and image correspondences can be computed automatically. This facilitates matching between quite disparate views-wide baseline stereo. Two extensions are made to the current small baseline algorithms: first, and most importantly, a viewpoint invariant measure is developed for assessing the affinity of corner neighbourhoods over image pairs; second, algorithms are given for generating putative corner matches between image pairs using local homographies. Two novel infrastructure developments are also described: the automatic generation of local homographies, and the combination of possibly conflicting sets of matches prior to RANSAC estimation. The wide baseline matching algorithm is demonstrated on a number of image pairs with varying relative motion, and for different scene types. All processing is automatic.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115113470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of 3D free-form objects using segment-based stereo vision","authors":"Y. Sumi, Y. Kawai, T. Yoshimi, F. Tomita","doi":"10.1109/ICCV.1998.710789","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710789","url":null,"abstract":"We propose a new method to recognize 3D free-form objects from their apparent contours. It is the extension of our established method to recognize objects with fixed edges. Object models are compared with 3D boundaries which are extracted by segment-based stereo vision. Based on the local shapes of the boundaries, candidate transformations are generated. The candidates are verified and adjusted based on the whole shapes of the boundaries. The models are built from all-around range data of the objects. Experimental results show the effectiveness of the method.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124913230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic generation of robot program code: learning from perceptual data","authors":"M. Yeasin, S. Chaudhuri","doi":"10.1109/ICCV.1998.710822","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710822","url":null,"abstract":"We propose a novel approach to program a robot by demonstrating the task multiple number of times in front of a vision system. Here we integrate human dexterity with sensory data using computer vision techniques in a single platform. A simultaneous feature detection and tracking framework is used to track various features (finger tips and the wrist joint). A Kalman filter does the tracking by predicting the tentative feature location and a HOS-based data clustering algorithm extracts the feature. Color information of the features are used for establishing correspondences. A fast, efficient and robust algorithm for the vision system thus developed process a binocular video sequence to obtain the trajectories and the orientation information of the end effector. The concept of a trajectory bundle is introduced to avoid singularities and to obtain an optimal path.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"35 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124987112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion estimation in image sequences using the deformation of apparent contours","authors":"Kalle Åström, Fredrik Kahl","doi":"10.1109/ICCV.1998.710829","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710829","url":null,"abstract":"The problem of determining the camera motion from apparent contours or silhouettes of curved three-dimensional surfaces is considered. In a sequence of images is shown how to use the generalized epipolar constraint on apparent contours. One such constraint is obtained for each epipolar tangency point in each image pair. Thus in theory the motion can be calculated from the deformation of a single contour. A robust algorithm for computing the motion is presented based on the maximum likelihood estimate. It is shown how to generate initial estimates on the camera motion using only the tracked contours. It is also shown how to improve this estimate by maximizing the likelihood function. The algorithm has been tested on real image sequences. The result is compared to that of using only point features. The statistical evaluation shows that the technique gives accurate and stable results.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126619609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A framework for modeling appearance change in image sequences","authors":"Michael J. Black, David J. Fleet, Y. Yacoob","doi":"10.1109/ICCV.1998.710788","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710788","url":null,"abstract":"Image \"appearance\" may change over time due to a variety of causes such as: 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) \"iconic changes\" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these \"appearance changes\" in an image sequence as a \"mixture\" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126629925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recovering epipolar geometry by reactive tabu search","authors":"Qifa Ke, Gang Xu, Songde Ma","doi":"10.1109/ICCV.1998.710804","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710804","url":null,"abstract":"In this paper we propose a new approach to recovering epipolar geometry from a pair of uncalibrated images. We first detect the feature points. By minimizing a proposed cost function, we match the feature points, discard the outliers and recover the epipolar geometry in one step. Experiments on real images show that this approach is effective and fast.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114435020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing algebraic error in geometric estimation problems","authors":"R. Hartley","doi":"10.1109/ICCV.1998.710760","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710760","url":null,"abstract":"This paper gives a widely applicable technique for solving many of the parameter estimation problems encountered in geometric computer vision. A commonly used approach is to minimize an algebraic error function instead of a possibly preferable geometric error function. It is claimed in this paper that minimizing algebraic error will usually give excellent results, and in fact the main problem with most algorithms minimizing algebraic distance is that they do not take account of mathematical constraints that should be imposed on the quantity being estimated. This paper gives an efficient method of minimizing algebraic distance while taking account of the constraints. This provides new algorithms for the problems of resectioning a pinhole camera, computing the fundamental matrix, and computing the tri-focal tensor. Evaluation results are given for the resectioning and tri-focal tensor estimation algorithms.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121994436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquiring 3D object models from specular motion using circular lights illumination","authors":"J. Zheng, A. Murata","doi":"10.1109/ICCV.1998.710854","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710854","url":null,"abstract":"This work recovers 3D graphics models of objects with specular surfaces. An object is rotated and continuous images of it are taken. Circular lights that generate cones of rays are used to illuminate the rotating object. When the lights are properly set each point on the object can be highlighted during the rotation. The shape for each rotational plane is measured independently using its corresponding epipolar plane image. A 3D graphics model is subsequently reconstructed by combining shapes at different rotation planes. Computing a shape is simple and requires only the motion of the highlight on each rotation plane. Results not obtained before are given in the 3D shape recovery experiments on real objects.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129745843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illumination-invariant color object recognition via compressed chromaticity histograms of color-channel-normalized images","authors":"M. S. Drew, Jie Wei, Ze-Nian Li","doi":"10.1109/ICCV.1998.710768","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710768","url":null,"abstract":"Several color object recognition methods that are based on image retrieval algorithms attempt to discount changes of illumination in order to increase performance when test image illumination conditions differ from those that obtained when the image database was created. Here we extend the seminal method of Swain and Ballard to discount changing illumination. The new method is based on the first stage of the simplest color indexing method, which uses angular invariants between color image and edge image channels. That method first normalizes image channels, and then effectively discards much of the remaining information. Here we adopt the color-normalization stage as an adequate color constancy step. Further, we replace 3D color histograms by 2D chromaticity histograms. Treating these as images, we implement the method in a compressed histogram-image domain using a combination of wavelet compression and Discrete Cosine Transform (DCT) to fully exploit the technique of low-pass filtering for efficiency. Results are very encouraging, with substantially better performance than other methods tested. The method is also fast, in that the indexing process is entirely carried out in the compressed domain and uses a feature vector of only 36 or 72 values.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123914757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Snake pedals: geometric models with physics-based control","authors":"B. Vemuri, Yanlin Guo","doi":"10.1109/ICCV.1998.710754","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710754","url":null,"abstract":"In this paper, we introduce a novel geometric shape modeling scheme which allows for representation, of global and local shape characteristics of an object. Geometric models are traditionally well suited for representing global shapes but not the local details. However, in this paper we propose a powerful geometric shape modeling scheme which allows for the representation of global shapes with local detail and permits model shaping as well as topological changes via physics-based control. The proposed modeling scheme consists of representing shapes by pedal curves and surfaces-pedal curves/surfaces are the loci of the foot of perpendiculars to the tangents of a fixed curve/surface from a fixed point called the pedal point. By varying the location of the pedal point, one can synthesize a large class of shapes which exhibit both local and global deformations. We introduce physics-based control for shaping these geometric models by letting the pedal point vary and use a dynamic spline to represent the position of this varying pedal point. The model dubbed as a \"snake pedal\" allows for interactive manipulation via forces applied to the snake. We demonstrate the applicability of this modeling scheme via examples of shape synthesis and shape estimation from real image data.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114010332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}