[1988 Proceedings] Second International Conference on Computer Vision: Latest Publications

The Motion Coherence Theory
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590011
A. Yuille, N. Grzywacz
{"title":"The Motion Coherence Theory","authors":"A. Yuille, N. Grzywacz","doi":"10.1109/CCV.1988.590011","DOIUrl":"https://doi.org/10.1109/CCV.1988.590011","url":null,"abstract":"Tliere are a number of important phenom- ena in motion perception involving colicrcnce. Examples include motion capture and motion cooperativity. We propose a theoretical model, called the motion coherence tlieory, that gives a possible explanation for these effects (Yuille and Grzywacz, 1988a,b). In this framework, the aperture problem can also be thought of as a problem of coherence and given a similar explanation. We propose the concept of a velocity field dcfined everywhere in the image, even where there is no explicit motion information available. Through a cost function, tlie model imposes smoothness on the velocity field in a more general way than previous theories. In this paper, we provide a de- tailed theoretical analysis of the motion coherence theory. We discuss its relations with previous theories and show that some of t1ic.m arc approximations to it. A sccorid pa- per (Grzywacz, Smith, and Yuillc, 1088) provides exten- sions and cletnilcd comparisons to psychophysical plienom- cna. Tlic theory applies to both short-range and long- range motion. It places them in the same computational framework aiid provides a way to define interactions be- twcr:11 the two 1)roccsses.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"329 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125227695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 156
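As an illustration of the abstract's idea of imposing smoothness on a dense velocity field through a cost function, the sketch below writes down a generic regularized functional. The data term, weights, and derivative orders are assumptions for illustration only, not the exact functional of the motion coherence theory.

```latex
% Illustrative smoothness functional for a velocity field v(x) defined
% everywhere in the image, fitted to sparse measurements u_i at points x_i
% (possibly only normal components, as in the aperture problem).
E(\mathbf{v}) \;=\; \sum_{i} \bigl\lVert \mathbf{u}_i - \mathbf{v}(\mathbf{x}_i) \bigr\rVert^2
\;+\; \lambda \int \Bigl( \lVert \nabla \mathbf{v} \rVert^2 + \mu \, \lVert \nabla^2 \mathbf{v} \rVert^2 \Bigr)\, d\mathbf{x}
```

The second term penalizes non-smoothness everywhere, including where no motion measurement exists, which is how such a formulation can propagate motion into empty regions (motion capture).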
The Creation Of Structure In Dynamic Shape
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590051
A. Yuille
{"title":"The Creation Of Structure In Dynamic Shape","authors":"A. Yuille","doi":"10.1109/CCV.1988.590051","DOIUrl":"https://doi.org/10.1109/CCV.1988.590051","url":null,"abstract":"","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"25 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113980414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
The Feasibility Of Motion And Structure Computations
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590046
J. Barron, A. Jepson, John K. Tsotsos
{"title":"The Feasibility Of Motion And Structure Computations","authors":"J. Barron, A. Jepson, John K. Tsotsos","doi":"10.1109/CCV.1988.590046","DOIUrl":"https://doi.org/10.1109/CCV.1988.590046","url":null,"abstract":"We address the problem of interpreting image velocity fields generated by a moving monocular observer viewing a stationary environment under perspective projection to obtain 3-D information about the relative motion of the observer (egomotion) and the relative depth of environmental surface points (environmental layout). The algorithm presented in this paper involves computing motion and structure from a spatio-temporal distribution of image velocities that are hypothesized to belong to the same 3-D planar surface. However, the main result of this paper is not just another motion and structure algorithm that exhibits some novel features but rather an extensive error analysis of the algorithm’s preformance for various types of noise in the image velocities. Waxman and Ullman [83] have devised an algorithm for computing motion and structure using image velocity and its 1st and 2d order spatial derivatives at one image point. We generalize this result to include derivative information in time as well. Further, we show the approximate equivalence of reconstruction algorithms that use only image velocities and those that use one image velocity and its 1st and/or 2”d spatio-temporal derivatives at one image point. The main question addressed in this paper is: “How accurate do the input image velocities have to be?’ or equivalently, “How accurate does the input image velocity and its I~ and 2& order derivatives have to be?“. The answer to this question involves worst case error analysis. We end the paper by drawing some conclusions about the feasibility of motion and structure calculations in general. I.1 Introduction In this paper, we present a algorithm for computing the motion and strncture parameters that describe egomotion and environmental layout from image velocity fields generated by a moving monocular observer viewing a stationary environment. Egomotion is defined as the motion of the observer relative to his environment and can be described by 6 parameters; 3 dvth-scaled translational parameters, Z and 3 rotation parameters, o. Environmental layout refers to the 3-D shape and location of objects in the environment. For monocular image sequences, en$ronmental layout is described by the normalized surface gradient, a, at each image point. To determine these motion and structure parameters we derive nonlinear equations relating image velocity at some image int ?(?*,t ‘) to the underlying motion and structure parameters at (P,c). The computaP tion of egomotion and environmental layout from image velocity is sometimes called the reconstruction problem; we reconstruct the observer’s motion, and the layout of his environment, from (timevarying) image velocity. 
A lot of research has been devoted to","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134054620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
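For context on the relation between image velocity and the motion and structure parameters mentioned above, the classical perspective motion-field equations are reproduced below (unit focal length, observer translation T, rotation ω, point depth Z, image coordinates x = X/Z, y = Y/Z). This is the standard textbook form, not necessarily the exact parameterization used in the paper, but it shows why only depth-scaled translation is recoverable.

```latex
% Instantaneous image velocity (u, v) of a point at depth Z under
% observer translation T = (T_x, T_y, T_z) and rotation w = (w_x, w_y, w_z):
u \;=\; \frac{x\,T_z - T_x}{Z} \;+\; \omega_x\, x y \;-\; \omega_y\,(1 + x^2) \;+\; \omega_z\, y
\qquad
v \;=\; \frac{y\,T_z - T_y}{Z} \;+\; \omega_x\,(1 + y^2) \;-\; \omega_y\, x y \;-\; \omega_z\, x
```

The translation enters only through T/Z, so the 3 translational parameters can be recovered only up to the depth scale, which matches the abstract's description of depth-scaled translational parameters.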
Towards The Automatic Generation Of Recognition Strategies
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590000
C. Hansen, T. Henderson
{"title":"Towards The Automatic Generation Of Recognition Strategies","authors":"C. Hansen, T. Henderson","doi":"10.1109/CCV.1988.590000","DOIUrl":"https://doi.org/10.1109/CCV.1988.590000","url":null,"abstract":"This paper describes a method for the automatic generation of recognition strategies. This is accomplished using a technique developed for quantifying the following properties of 3-D features which compose models used in 3-D computer vision: robustness, completeness, consistency, cost, and uniqueness. By utilizing this inforniation, the automatic synthesis of a specialized recognition scheme, called a Strategy Tree, is accomplished. Strategy Trees describe, in a systematic and robust manner, the search process used for recognition and localization of particular objects in the given scene. System constraints are satisfied which lead to a set of features which guide the recognition process. Each feature has a Corroborating Evidence Subtrees which validate the initial hypothesis. Verification techniques, used to substantiate or refute these hypotheses, are explored. Experiments are presented.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"60 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114490157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
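A purely hypothetical sketch of the kind of data structure the abstract describes: features scored by robustness, completeness, consistency, cost, and uniqueness, arranged in a Strategy Tree with Corroborating Evidence Subtrees. The names, fields, and ranking rule below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureScores:
    # The five feature properties the abstract says are quantified.
    robustness: float
    completeness: float
    consistency: float
    cost: float
    uniqueness: float

@dataclass
class StrategyNode:
    feature: str                      # e.g. "planar-face" (hypothetical feature label)
    scores: FeatureScores
    corroborating: List["StrategyNode"] = field(default_factory=list)

def order_features(nodes: List[StrategyNode]) -> List[StrategyNode]:
    """Rank candidate features so cheap, reliable, distinctive ones are tried first.
    The weighting here is a placeholder; the paper's selection criteria differ."""
    return sorted(nodes, key=lambda n: (-n.scores.robustness * n.scores.uniqueness, n.scores.cost))
```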
Optimal Corner Detector
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.589975
K. Rangarajan, M. Shah, D. V. Brackle
{"title":"Optimal Corner Detector","authors":"K. Rangarajan, M. Shah, D. V. Brackle","doi":"10.1109/CCV.1988.589975","DOIUrl":"https://doi.org/10.1109/CCV.1988.589975","url":null,"abstract":"A corner is defined as the junction point of two or more straight line edges. Corners are special features in a image. They are of great use in computing the optical flow and structure from motion. In this paper, we report an optimal corner detector which uses a mathematical model for a corner. An optimal gray tone corner detector is derived for a restricted case of corners, i.e. corners made by lines which are symmetric about a horizontal axis. The resultant corner detector is described by product of sine in x and exponential in y direction in a portion of the mask and by the product of two sines in x and y directions in the remaining portion of it. It is then generalized to include any corner of an arbitrary angle and orientation. This results in an approximation of all corners by a total of twelve major types. It is observed that all the twelve masks can actually be configured with four smaller sub-masks, and this results in a significant reduction in the computetions. The computations are further reduced by using the the separability of masks. Results for synthetic and real scenes are reported.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122581629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 122
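To make the mask description concrete, here is a sketch that builds a mask whose upper region is the product of a sine in x and an exponential in y, and whose lower region is the product of two sines, as the abstract describes. The frequency, decay constant, mask size, and region split are illustrative assumptions, not the paper's derived optimal values.

```python
import numpy as np

def corner_mask(size: int = 15, k: float = np.pi / 8, a: float = 0.3) -> np.ndarray:
    """Illustrative corner mask: sin(k*x) * exp(-a*y) above the corner axis,
    sin(k*x) * sin(k*y) below it. Parameters are placeholders, not the
    optimal values derived in the paper."""
    half = size // 2
    xs = np.arange(-half, half + 1)
    X, Y = np.meshgrid(xs, xs)
    return np.where(Y >= 0,
                    np.sin(k * X) * np.exp(-a * Y),   # region above the axis of symmetry
                    np.sin(k * X) * np.sin(k * Y))    # region below the axis of symmetry

# Because each region is a product of a function of x and a function of y,
# convolution with the mask can be done as two 1-D passes per region
# (separability), which is the source of the savings the abstract mentions.
response_mask = corner_mask()
```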
How the Delaunay Triangulation Can Be used For Representing Stereo Data
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.589971
Elisabeth Le Bras-Mehlman, M. Schmitt, O. Faugeras, J. Boissonnat
{"title":"How the Delaunay Triangulation Can Be used For Representing Stereo Data","authors":"Elisabeth Le Bras-Mehlman, M. Schmitt, O. Faugeras, J. Boissonnat","doi":"10.1109/CCV.1988.589971","DOIUrl":"https://doi.org/10.1109/CCV.1988.589971","url":null,"abstract":"This article proposes a coherent method of interpolating 3D data obtained for example by stereo, with a polyhedral surface by means of the Delaunay Triangulation. We first give some theoretical properties concerning the approximations of sampled objects we obtain when the sampling rate increases, based on the study of their skeleton using some tools of Mathematical Morphology. Then, we present the algorithms and their complexity analysis which yield both a surface representation of objects and a volume representation of free space which may be useful in Robotics. This goal is achieved by means of a simple visibility criterion. The method is intrinsically discontinuity preserving and can be used for the integration of multiple viewpoints.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125444992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
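A minimal sketch of the basic step the abstract describes: triangulating matched image points and lifting the triangles to a polyhedral surface, here using SciPy's Delaunay routine. The stereo data are synthetic placeholders, and the paper's visibility criterion for carving free space is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical stereo output: image coordinates (x, y) in the reference view and
# a reconstructed depth Z per matched point; real values would come from a stereo matcher.
rng = np.random.default_rng(0)
points_2d = rng.random((200, 2))
depths = 1.0 + rng.random(200)

# Back-project assuming unit focal length: (X, Y, Z) = (x*Z, y*Z, Z).
points_3d = np.column_stack([points_2d * depths[:, None], depths])

# Triangulate in the image plane; lifting each 2-D triangle by its vertices'
# depths gives one face of a polyhedral surface approximating the scene.
tri = Delaunay(points_2d)
faces = points_3d[tri.simplices]    # shape (n_triangles, 3, 3)
```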
A Novel Approach To Colour Constancy
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.589967
D. Forsyth
{"title":"A Novel Approach To Colour Constancy","authors":"D. Forsyth","doi":"10.1109/CCV.1988.589967","DOIUrl":"https://doi.org/10.1109/CCV.1988.589967","url":null,"abstract":"By approaching colour constancy as a problem of predicting colour appearance, we derive the colour constancy equation, which we use to enumerate those properties of illuminant and surface reflectance required for colour constancy. We then use a physical realisability constraint on surface reflectances to construct the set of illuminants under which the image observed can have arisen. Two distinct algorithms arise from employing this constraint in conjunction with the colour constancy equation: the first corresponds to normalisation according to a coefficient rule, the second is considerably more complex, and allows a large number of parameters in the illuminant to be recovered. The simpler algorithm has been tested extensively on images of real Mondriaan’s, taken under different coloured lights and displays good constancy. The results also indicate that good constancy requires that receptoral gain be controlled.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122279942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
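The simpler of the two algorithms is described as normalisation according to a coefficient rule, i.e. a per-channel (diagonal) rescaling of the image. The sketch below shows a generic version of such a rescaling with a naive gain estimate; estimating the gains via the physical-realisability constraint, as the paper does, is not reproduced here.

```python
import numpy as np

def coefficient_rule_normalise(image: np.ndarray) -> np.ndarray:
    """Generic per-channel (diagonal) colour normalisation.
    `image` is H x W x 3 linear RGB. The gains here are simply the channel
    maxima (a naive "white patch" style estimate); Forsyth's algorithm instead
    constrains the gains using physical realisability of surface reflectances."""
    gains = image.reshape(-1, 3).max(axis=0)
    gains = np.where(gains > 0, gains, 1.0)   # avoid division by zero on empty channels
    return image / gains
```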
Matching Perspective Images Using Geometric Constraints And Perceptual Grouping
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590050
Long Quan, R. Mohr
{"title":"Matching Perspective Images Using Geometric Constraints And Perceptual Grouping","authors":"Long Quan, R. Mohr","doi":"10.1109/CCV.1988.590050","DOIUrl":"https://doi.org/10.1109/CCV.1988.590050","url":null,"abstract":"This paper presents a method for matching two perspective images from a well defined indoor scene. The method insists on exploring both geometric constraints and perceptual grouping for reducing the search space. The geometric constraints used are principally perspective information, such as vanishing point, horizon line and projective coordinates. The perceptual groups are such as directional, rays and collinear groups. Our method succeeds in analyzing successively the rotational and translational effect. which makes geometric constraints more direct and easier to explore. The matching strategy is coarse-to-fine, based on the hierarchical perceptual grouping. The hypotheses are then propagated and verified to the whole image by neighboring, relationship without backtracking. The matching results are also presented.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132627961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
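One of the geometric cues listed above, the vanishing point of a directional group of segments, can be estimated as sketched below using homogeneous line coordinates and a least-squares intersection. This is a standard construction shown for illustration only, not the paper's specific procedure.

```python
import numpy as np

def line_homog(p, q):
    """Homogeneous line through image points p = (x, y) and q = (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares vanishing point of a directional group of segments.
    `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs assumed to come
    from parallel 3-D edges; forming that group is the perceptual-grouping step."""
    L = np.array([line_homog(p, q) for p, q in segments])
    # v minimising ||L v|| with ||v|| = 1 is the last right singular vector of L.
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    if abs(v[2]) < 1e-12:
        return v[:2]            # vanishing point at infinity: return its direction
    return v[:2] / v[2]
```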
Analysis Of A Sequence Of Stereo Scenes Containing Multiple Moving Objects Using Rigidity Constraints
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.589989
Zhengyou Zhang, O. Faugeras, N. Ayache
{"title":"Analysis Of A Sequence Of Stereo Scenes Containing Multiple Moving Objects Using Rigidity Constraints","authors":"Zhengyou Zhang, O. Faugeras, N. Ayache","doi":"10.1109/CCV.1988.589989","DOIUrl":"https://doi.org/10.1109/CCV.1988.589989","url":null,"abstract":"Iri this paper, we describe a method for comput.ing the rriovc~rrient. of objects as well as that of a mobile robot from ii scqiicrice of stereo frames. Stereo frames are obtained at ~iifl’(~reiit, instants by a stereo rig, when the mobile robot rIilvigat(>s in an unknown environment possibly containing ~)iiit: rrioving rigid objects. An approach based on rigidity ( oiihlraiiit,s is presented for registering two stereo frames. Wcs dernoristrate how the uncertainty of measurements can IN^ integrated with the formalism of the rigidity constraints. A iiew technique is described to match very noisy segments. ‘1‘11~ iiifluence of egomotion on observed movement:; of ob,j(~ 1,s is discussed in detail. Egomotion is fir:jt determined itiid then eliminated before determination of the motion of o1)jt:cl.s. The proposed algorithm is completely automatic. I~:xperirnerital results are provided. Some remarks conclude ths paper.","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131185364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
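For orientation, registering two frames of matched 3-D points under a rigidity assumption can be solved in closed form, as sketched below with the standard SVD-based least-squares solution. The paper itself works with noisy line segments and propagates measurement uncertainty, which this sketch does not attempt.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid motion (R, t) with Q ~ R @ P + t, for matched 3-D
    point sets P, Q of shape (N, 3). This is the standard SVD (Arun/Kabsch)
    solution, shown only to illustrate inter-frame registration under a
    rigidity constraint; it is not the algorithm of the paper."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t
```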
Occlusion-sensitive Matching
[1988 Proceedings] Second International Conference on Computer Vision Pub Date : 1988-12-05 DOI: 10.1109/CCV.1988.590002
W. Thompson, Rand P. Whillock
{"title":"Occlusion-sensitive Matchiing","authors":"W. Thompson, Rand P. Whillock","doi":"10.1109/CCV.1988.590002","DOIUrl":"https://doi.org/10.1109/CCV.1988.590002","url":null,"abstract":"","PeriodicalId":229545,"journal":{"name":"[1988 Proceedings] Second International Conference on Computer Vision","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1988-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128151471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5