{"title":"Mathematical properties of the 2-D motion field: from singular points to motion parameters","authors":"A. Verri, F. Girosi, V. Torre","doi":"10.1109/WVM.1989.47109","DOIUrl":"https://doi.org/10.1109/WVM.1989.47109","url":null,"abstract":"The authors study the mathematical properties of the 2-D motion field which are useful for motion understanding. It has been shown that the location and the nature of singular points of the motion field carry most of the relevant information on 3-D motion. Moreover, since the singular points of the motion field are usually structurally stable, the extraction of 3-D motion information from them is robust against noise and small perturbations. The practical relevance of the proposed approach is justified by the observation that it is possible to obtain from a time-varying sequence of images a 2-D vector field which is very close to the true motion field. As a consequence the recovery of 3-D motion from structurally stable properties of the optical flow is feasible and provides good experimental results. Therefore, the whole framework seems very appropriate for computer vision.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126467601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpretation of image sequences by spatio-temporal analysis","authors":"S. Peng, G. Medioni","doi":"10.1109/WVM.1989.47128","DOIUrl":"https://doi.org/10.1109/WVM.1989.47128","url":null,"abstract":"The authors present a system designed to analyze any sequence of closely sampled image frames. They describe recent experiments conducted in a vision laboratory using spatial and temporal information. The importance of temporal information is illustrated by experiments on random dot images and a method to use this information by a best-first-search in the temporal domain is suggested. A more elegant algorithm for extracting motion information in more realistic images is then presented. Results on both synthetic and real image sequences are shown. The advantages and limitations of both approaches are highlighted. It is concluded that, in contrast to most previous approaches, the proposed system should be quite robust, as it is capable of handling occlusion and disocclusion, which are explicitly modeled.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128006686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recovering 3-D motion parameters from image sequences with gross errors","authors":"C.-N. Lee, R. Haralick, X. Zhuang","doi":"10.1109/WVM.1989.47093","DOIUrl":"https://doi.org/10.1109/WVM.1989.47093","url":null,"abstract":"A robust algorithm to estimate 3-D motion parameters from a sequence of extremely noisy images is developed. The noise model includes correspondence mismatch errors, outliers, uniform noise, and Gaussian noise. More than 100000 controlled experiments were performed. The experimental results show that the error in the estimated 3-D parameters of the linear algorithm increases almost linearly with the fraction of outliers. However, the increase for the robust algorithm is much slower, indicating its better performance and stability with data containing blunders. The robust algorithm can detect outliers, mismatching errors, and blunders in up to 30% of the observed data. Therefore, it can be an effective tool for estimating 3-D motion parameters from multiframe time-sequence imagery.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127965135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object tracking with a moving camera","authors":"P. Burt, J. Bergen, R. Hingorani, R. Kolczynski, W.A. Lee, A. Leung, J. Lubin, H. Shvayster","doi":"10.1109/WVM.1989.47088","DOIUrl":"https://doi.org/10.1109/WVM.1989.47088","url":null,"abstract":"The authors describe the implementation of the local and focal levels of a dynamic-motion-analysis framework. Dynamic motion analysis achieves efficiency through sequential decomposition of a complex analysis task into simpler tasks, by 'peeling off complexity', and by directing analysis to portions of a scene that are most critical to the vision task. The authors describe four basic techniques for implementing dynamic analysis: foveation, two-stage motion computation, tracking, and one-component-at-a-time segmentation. Each process entails several iterations of a basic operation but convergence is fast and the computations themselves can be relatively crude. By way of illustration, the dynamic motion analysis technique was applied to a number of image sequences. Particular attention is given to an actual video sequence of a helicopter flying over a terrain. The sequence was obtained from a camera moving relative to the helicopter. It is concluded that the dynamic approach to motion analysis holds the promise of performing real-time processing to obtain precise, robust results, using practical hardware.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"369 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113997570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A parallel motion algorithm consistent with psychophysics and physiology","authors":"H. Bulthoff, J. Little, T. Poggio","doi":"10.1109/WVM.1989.47106","DOIUrl":"https://doi.org/10.1109/WVM.1989.47106","url":null,"abstract":"The authors describe a simple, parallel algorithm that successfully computes an optical flow from sequences of real images, is consistent with human psychophysics, and suggests a plausible physiological model. Regularizing optical flow computation leads to a formulation which minimizes matching error and, at the same time, maximizes smoothness of the optical flow. The authors develop an approximation to the full regularization computation in which corresponding points are found by comparing local patches of images. Selection among competing matches is performed using a winner-take-all scheme. The algorithm is independent of the types of features used for matching. Experiments with natural images show that the scheme is effective and robust against noise. The algorithm shows several of the same 'illusions' that humans perceive. A natural physiological implementation of the model is consistent with data from cortical areas V1 and MT.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115115054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using motion from orthographic projections to prune 3-D point matches","authors":"Homer H. Chen, T. S. Huang","doi":"10.1109/WVM.1989.47121","DOIUrl":"https://doi.org/10.1109/WVM.1989.47121","url":null,"abstract":"The authors present a new method that uses motion constraints to prune 3-D point matches. The method discards noisy z coordinates of points and uses only the x and y coordinates to compute motion. This method is specifically designed to resolve the matching ambiguity caused by the use of large error tolerances in rigidity tests to account for depth errors. A least-squares solution of the motion problem is derived. Results show that this method detects false matches more effectively than the traditional method, which uses full 3-D coordinates.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"66 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121005067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation of a nonlinear approach to the motion correspondence problem","authors":"S. Fogel","doi":"10.1109/WVM.1989.47098","DOIUrl":"https://doi.org/10.1109/WVM.1989.47098","url":null,"abstract":"Changes in successive images from a time-varying image sequence of a scene can be characterized by velocity vector fields. The estimate of the velocity vector field is determined as a compromise that satisfies two sets of constraints in addition to the regularization constraints: the optical flow constraints, which relate the values of the time-varying image function at corresponding points of successive images of the sequence, and the directional smoothness constraints, which relate the values of neighboring velocity vectors. To achieve such a compromise, the author introduces a system of nonlinear equations in the unknown estimate of the velocity vector field. An iterative method for solving this system is developed. The optical flow and smoothness constraints are selectively suppressed in the neighborhoods of occlusion boundaries. This is accomplished by attaching a weight to each constraint. The spatial variations in the values of successive images of the sequence, with the correspondence specified by a current estimate of the velocity vector field, and variations in the current estimate of the velocity vectors themselves are implicitly used to adjust the weight function.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"526 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123205922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiments and uniqueness results on object structure and kinematics from a sequence of monocular images","authors":"T. Broida, R. Chellappa","doi":"10.1109/WVM.1989.47090","DOIUrl":"https://doi.org/10.1109/WVM.1989.47090","url":null,"abstract":"The authors consider the problem of using a sequence of monocular images (central projections) of a three-dimensional (3-D) moving object to estimate both its structure and kinematics. The object is assumed to be rigid, and its motion is assumed to be 'smooth'. A set of object match points is assumed to be available, consisting of fixed features on the object, the image-plane coordinates of which have been extracted from successive images in the sequence. The measured data are the noisy image-plane coordinates of this set of object match points, taken from each image in the sequence. Results of an experiment with real imagery are presented, involving estimation of 28 unknown translational, rotational, and structural parameters, based on 12 images with seven feature points. Uniqueness results are summarized for the case of purely translational motion. A test based on a singular-value decomposition is described that determines whether or not noise-free data from an image sequence uniquely determines the elements of any given parameter vector, and sample uniqueness results are given. It is concluded that the experimental and the uniqueness results presented demonstrate the feasibility of the proposed approach.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129972067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the tracking of featureless objects with occlusion","authors":"G. Gordon","doi":"10.1109/WVM.1989.47089","DOIUrl":"https://doi.org/10.1109/WVM.1989.47089","url":null,"abstract":"The author discusses an extremely efficient low-level tracking algorithm using only centers of gravity and sizes of the objects. He uses the center of gravity as the main tracking mechanism and the size of the object to aid in solving occlusion problems. Thus the size of the objects is mainly used as an aid in deciding how and if the number of objects changes from one frame to another. The author reports on experiments using video images of tennis balls bouncing through the field of vision. The camera is fixed and the lighting conditions are controlled. The balls are easily extracted from the background by subtraction of the image from a registered image of the background under the same lighting conditions. It is concluded that the proposed techniques can be applied to both video and infrared images, and are especially useful when the objects lack significant 'features' for matching.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130780738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A common theoretical framework for visual motion's spatial and temporal coherence","authors":"N. Grzywacz, J. A. Smith, Alan L. Yuille","doi":"10.1109/WVM.1989.47104","DOIUrl":"https://doi.org/10.1109/WVM.1989.47104","url":null,"abstract":"A recently proposed computational theory for the perception of spatially coherent visual motion (see A.L. Yuille et al., 1988) is extended to include temporal coherence. Particularly, it is argued that a good extension is to postulate that the direction of visual motion does not change much in time. The extended theory is consistent with the psychophysical phenomena of motion capture, motion cooperativity, nonrigid wave motion, and motion inertia. The authors also discuss the possible roles of coherence for motion perception. It is shown that spatial coherence improves the signal-to-noise ratio of the perceived motion, solves the aperture problem, and simplifies the correspondence problem. Temporal coherence also simplifies the correspondence problem and speeds up its solution.<<ETX>>","PeriodicalId":342419,"journal":{"name":"[1989] Proceedings. Workshop on Visual Motion","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121887729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}