{"title":"Egomotion from global flow field data","authors":"V. Sundareswaran","doi":"10.1109/WVM.1991.212777","DOIUrl":"https://doi.org/10.1109/WVM.1991.212777","url":null,"abstract":"The author presents two independent, global algorithms to compute the motion parameters. The flow circulation algorithm fits a linear surface to the curl of the flow field. The parameters of this linear surface are proportional to the angular velocity components. The author shows that instead of the curl values, one can use circulation values, which are simply contour integrals of the flow field on the image plane. The FOE (focus of expansion) location algorithm computes a certain circular component of the flow field that is a quadratic polynomial only when the correct FOE is used in the computation of the circular component. The author uses this observation to provide a method to determine the location of the FOE. Experimental results are presented.","PeriodicalId":208481,"journal":{"name":"Proceedings of the IEEE Workshop on Visual Motion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117350857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion analysis and modeling of epicardial surfaces from point and line correspondences","authors":"S. K. Mishra, D. Goldgof","doi":"10.1109/WVM.1991.212771","DOIUrl":"https://doi.org/10.1109/WVM.1991.212771","url":null,"abstract":"This paper presents a new algorithm for recovering motion parameters of nonrigid objects using both point and line correspondences. It requires estimating the coefficients of the first fundamental form before and after the motion. This algorithm has several advantages over a previously developed algorithm which uses only point correspondences and Gaussian curvature. First, it does not require any assumption on the spatial distribution of the stretching parameter in conformal motion. Second, the amount of computation is significantly reduced. The algorithm is tested on both simulated and real data and its performance is evaluated. In the second part, several issues related to nonrigid surface modeling and reconstruction from sparse data are discussed.","PeriodicalId":208481,"journal":{"name":"Proceedings of the IEEE Workshop on Visual Motion","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123334197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast subspace algorithm for recovering rigid motion","authors":"Allan D. Jepson, D. J. Heeger","doi":"10.1109/WVM.1991.212779","DOIUrl":"https://doi.org/10.1109/WVM.1991.212779","url":null,"abstract":"The image motion field for an observer moving through a static environment depends on the observer's translational and rotational velocities along with the distances to surface points. Given such a motion field as input, the authors present a new algorithm for computing the observer's motion and the depth structure of the scene. The approach is a further development of subspace methods. This class of methods involves splitting the equations describing the motion field into separate equations for the observer's translational direction, the rotational velocity and the relative depths. The resulting equations can then be solved successively, beginning with the equations for the translational direction. The authors show how this first step can be simplified considerably. The consequence is that the observer's velocity and the relative depths to points in the scene can all be recovered by successively solving three linear problems.","PeriodicalId":208481,"journal":{"name":"Proceedings of the IEEE Workshop on Visual Motion","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124084062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion-boundary illusions and their regularization","authors":"Y. Aloimonos, Liuqing Huang","doi":"10.1109/WVM.1991.212783","DOIUrl":"https://doi.org/10.1109/WVM.1991.212783","url":null,"abstract":"The authors performed several experiments in which they ensured that the only cues available to the observer were contour and motion. It turns out that when humans combine information from contour and motion to reconstruct the shape of an object in view, if the results of the two modules (shape from contour and structure from motion) are inconsistent, one of the cues is discarded entirely and an illusion is experienced. The authors describe examples of such illusions and identify the conditions under which they occur. Finally, they introduce a computational theory for combining contour and motion using the theory of regularization. The theory explains such illusions and predicts many more.","PeriodicalId":208481,"journal":{"name":"Proceedings of the IEEE Workshop on Visual Motion","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124683017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of velocity, acceleration and disparity in time sequences","authors":"H. Bårman, L. Haglund, H. Knutsson, G. Granlund","doi":"10.1109/WVM.1991.212789","DOIUrl":"https://doi.org/10.1109/WVM.1991.212789","url":null,"abstract":"The paper presents a general framework for the analysis of time sequences. Features extracted include speed, acceleration and disparity/depth. The method uses spatio-temporal filtering in a hierarchical structure. Synthetic and real world examples are included.","PeriodicalId":208481,"journal":{"name":"Proceedings of the IEEE Workshop on Visual Motion","volume":"421 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122864758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}