{"title":"Toward recovering shape and motion of 3D curves from multi-view image sequences","authors":"R. Carceroni, Kiriakos N. Kutulakos","doi":"10.1109/CVPR.1999.786938","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786938","url":null,"abstract":"We introduce a framework for recovering the 3D shape and motion of unknown, arbitrarily-moving curves from two or more image sequences acquired simultaneously from distinct points in space. We use this framework to (1) identify ambiguities in the multi-view recovery of (rigid or nonrigid) 3D motion for arbitrary curves, and (2) identify a novel spatio-temporal constraint that couples the problems of 3D shape and 3D motion recovery in the multi-view case. We show that this constraint leads to a simple hypothesize-and-test algorithm for estimating 3D curve shape and motion simultaneously. Experiments performed with synthetic data suggest that, in addition to recovering 3D curve motion, our approach yields shape estimates of higher accuracy than those obtained when stereo analysis alone is applied to a multi-view sequence.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85863472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstruction of linearly parameterized models from single images with a camera of unknown focal length","authors":"David Jelinek, C. J. Taylor","doi":"10.1109/CVPR.1999.784657","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784657","url":null,"abstract":"This paper deals with the problem of recovering the dimensions of an object and its pose from a single image acquired with a camera of unknown focal length. It is assumed that the object in question can be modeled as a polyhedron where the coordinates of the vertices can be expressed as a linear function of a dimension vector, /spl lambda/. The reconstruction program takes as input a set of correspondences between features in the model and features in the image. From this information the program determines an appropriate projection model for the camera (scaled orthographic or perspective), the dimensions of the object, its pose relative to the camera and, in the case of perspective projection, the focal length of the camera. We demonstrate that this reconstruction task can be framed as an unconstrained optimization problem involving a small number of variables, no more than four, regardless of the number of parameters in the dimension vector.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89102486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient recursive factorization method for determining structure from motion","authors":"Yanhua Li, M. Brooks","doi":"10.1109/CVPR.1999.786930","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786930","url":null,"abstract":"A recursive method is presented for recovering 3D object shape and camera motion under orthography from an extended sequence of video images. This may be viewed as a natural extension of both the original and the sequential factorization methods. A critical aspect of these factorization approaches is the estimation of the so-called shape space, and they may in part be characterized by the manner in which this subspace is computed. If P points are tracked through F frames, the recursive least-squares method proposed in this paper updates the shape space with complexity O(P) per frame. In contrast, the sequential factorization method updates the shape space with complexity O(P/sup 2/) per frame. The original factorization method is intended to be used in batch mode using points tracked across all available frames. It effectively computes the shape space with complexity O(FP/sup 2/) after F frames. Unlike other methods, the recursive approach does not require the estimation or updating of a large measurement or covariance matrix. Experiments with real and synthetic image sequences confirm the recursive method's low computational complexity and good performance, and indicate that it is well suited to real-time applications.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76021097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advances in daylight statistical colour modelling","authors":"D. Alexander","doi":"10.1109/CVPR.1999.786957","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786957","url":null,"abstract":"In this paper, parametric statistical modelling of distributions of colour camera data is discussed. A review is provided with some analysis of the properties of some common models, which are generally based on an assumption of independence of the chromaticity and intensity components of colour data. Results of an empirical comparison of the performance of various models are also reviewed. These results indicate that such models are not appropriate for situations other than highly controlled environments. In particular, they perform poorly for daylight imagery. Here, a modification to existing statistical colour models is proposed and the resultant new models are assessed using the same methodology as for the previous results. This simple modification, which is based on the inclusion of an ambient term in the underlying physical model, is shown to have a major impact on the performance of the models in less constrained daylight environments.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77603104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface reconstruction from multiple aerial images in dense urban areas","authors":"M. Fradkin, M. Roux, H. Maître, U. Leloglu","doi":"10.1109/CVPR.1999.784639","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784639","url":null,"abstract":"Accurate 3D surface models of dense urban areas are essential for a variety of applications, such as cartography, urban planning and monitoring mobile communications, etc. Since manual surface reconstruction is very costly and time-consuming, the development of automated algorithms is of great importance. While most of existing algorithms focus on surface reconstruction either in rural or sub-urban areas, we present an approach dealing with dense urban scenes. The approach utilizes different image-derived cues, like multiview stereo and color information, as well as the general scene knowledge, formulated in data-driven reasoning and geometric constraints. Another important feature of our approach is simultaneous processing of 2D and 3D data. Our approach begins with two independent tasks: stereo reconstruction using multiple views and region-based image segmentation, resulting in generation disparity and segmentation maps, respectively. Then, the information derived from the both maps is utilized for generation of a dense elevation map, through robust verification of planar surface approximations for the detected regions and imposition of geometric constraints. The approach has been successfully tested on complex residential and industrial scenes.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91500286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multiple hypothesis approach to figure tracking","authors":"Tat-Jen Cham, James M. Rehg","doi":"10.1109/CVPR.1999.784636","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784636","url":null,"abstract":"This paper describes a probabilistic multiple-hypothesis framework for tracking highly articulated objects. In this framework, the probability density of the tracker state is represented as a set of modes with piecewise Gaussians characterizing the neighborhood around these modes. The temporal evolution of the probability density is achieved through sampling from the prior distribution, followed by local optimization of the sample positions to obtain updated modes. This method of generating hypotheses from state-space search does not require the use of discrete features unlike classical multiple-hypothesis tracking. The parametric form of the model is suited for high dimensional state-spaces which cannot be efficiently modeled using non-parametric approaches. Results are shown for tracking Fred Astaire in a movie dance sequence.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91347194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven shape-from-shading using curvature consistency","authors":"P. L. Worthington, E. Hancock","doi":"10.1109/CVPR.1999.786953","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786953","url":null,"abstract":"This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. Firstly, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard-constraint. This improves the data-closeness of the recovered needle-map. Secondly, we consider how topographic constraints can be lured to impose local consistency on the recovered needle-map. We present several alternative curvature consistency models, and provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground-truth surface-normals. The main conclusion drawn from our analysis is that the new framework allows rapid development of more appropriate constraints on the SFS problem.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87054479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deformable shape detection and description via model-based region grouping","authors":"S. Sclaroff, Lifeng Liu","doi":"10.1109/CVPR.1999.784603","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784603","url":null,"abstract":"A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85047888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial filter selection for illumination-invariant color texture discrimination","authors":"Bea Thai, Glenn Healey","doi":"10.1109/CVPR.1999.784623","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784623","url":null,"abstract":"Color texture contains a large amount of spectral and spatial structure that can be exploited for recognition. Recent work has demonstrated that spatial filters offer a convenient means of extracting illumination invariant spatial information from a color image. In this paper, we address the problem of deriving optimal fillers for illumination-invariant color texture discrimination. Color textures are represented by a set of illumination-invariant features that characterize the color distribution of a filtered image region. Given a pair of color textures, we derive a spatial filter that maximizes the distance between these textures in feature space. We provide a method for using the pair-wise result to obtain a filter that maximizes discriminability among multiple classes. A set of experiments on a database of deterministic and random color textures obtained under different illumination conditions demonstrates the improved discriminatory power achieved by using an optimized filler.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90494267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On plane-based camera calibration: A general algorithm, singularities, applications","authors":"P. Sturm, S. Maybank","doi":"10.1109/CVPR.1999.786974","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786974","url":null,"abstract":"We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that can not be estimated. Experimental results of our method are shown that exhibit the singularities while revealing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89320146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}