Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271): Latest Publications

Hyperbolic "Smoothing" of shapes
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710721
Kaleem Siddiqi, A. Tannenbaum, S. Zucker
Abstract: We have been developing a theory of generic 2-D shape based on a reaction-diffusion model from mathematical physics. The description of a shape is derived from the singularities of a curve evolution process driven by the reaction (hyperbolic) term. The diffusion (parabolic) term is related to smoothing and shape simplification. However, unifying the two is problematic, because the slightest amount of diffusion dominates and prevents the formation of generic first-order shocks. The technical issue is whether it is possible to smooth a shape, in any sense, without destroying the shocks. We now report a constructive solution to this problem, obtained by embedding the smoothing term in a global metric against which a purely hyperbolic evolution is performed from the initial curve. This is a new flow for shape that extends the advantages of the original one. Specific metrics are developed which lead to a natural hierarchy of shape features, analogous to the simplification one might perceive when viewing an object from increasing distances. We illustrate the new flow with a variety of examples.
Citations: 20
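The hyperbolic (reaction) term of such a flow moves every point of the curve at constant speed along its normal; the paper's contribution is to modulate this purely hyperbolic motion by a global metric. A minimal sketch of that idea follows; the `metric` callback is a hypothetical stand-in for the paper's specific metric constructions, not the authors' actual definition:

```python
import numpy as np

def evolve_curve(pts, speed=1.0, metric=lambda p: 1.0, dt=0.01, steps=100):
    """Move each vertex of a closed polygon along its normal.

    pts:    (N, 2) array of vertices, ordered counter-clockwise.
    metric: position-dependent factor modulating the constant reaction
            speed (a stand-in for the paper's global metric).
    """
    pts = pts.astype(float).copy()
    for _ in range(steps):
        # Tangents by central differences on the closed curve.
        tang = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        # Outward normal for a CCW-ordered curve: tangent rotated clockwise.
        normal = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
        g = np.array([metric(p) for p in pts])[:, None]
        pts -= dt * speed * g * normal   # inward, shock-forming motion
    return pts

# Sanity check: a unit circle under unit-speed inward motion shrinks
# linearly, so after total time 0.5 its radius is 0.5.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
shrunk = evolve_curve(circle, speed=1.0, dt=0.005, steps=100)
```

With a non-constant `metric`, different parts of the curve advance at different rates, which is the mechanism by which the flow can simplify shape while remaining purely hyperbolic.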
A cooperative framework for segmentation using 2D active contours and 3D hybrid models as applied to branching cylindrical structures
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710758
Thomas O'Donnell, M. Jolly, Alok Gupta
Abstract: Hybrid models are powerful tools for recovery in that they simultaneously provide a gross parametric as well as a detailed description of an object. However, it is difficult to employ hybrid models directly in the segmentation process, since they are not guaranteed to locate the optimal boundaries in cross-sectional slices. Propagating 2D active contours from slice to slice to delineate an object's boundaries, on the other hand, is often effective, but may run into problems when the object's topology changes, such as at bifurcations, or even in areas of high curvature. Here we present a cooperative framework that exploits the positive aspects of both the 3D hybrid model and 2D active contour approaches for segmentation and recovery. In this framework, the user-defined parametric component of a 3D hybrid model provides constraints for a set of 2D segmentations performed by active contours. The same hybrid model is then fit both parametrically and locally to this segmentation. For the hybrid model fit we employ several new variations on the physically-motivated paradigm which seek to speed recovery while guaranteeing stability. A by-product of these variations is an increased generality of the method via the elimination of some of its ad hoc parameters. We apply our cooperative framework to the recovery of branching cylindrical structures from 3D image volumes. The hybrid model we employ has a novel parametric component which is a fusion of individual cylinders. These cylinders have spines that are arbitrary space curves and cross-sections that may be any star-shaped planar curve.
Citations: 25
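The parametric component described here, cylinders whose spines are space curves and whose cross-sections are star-shaped planar curves r(θ), can be illustrated with a toy sweep routine. The frame construction and names below are illustrative, not the authors' formulation:

```python
import numpy as np

def sweep_cylinder(spine, radius_fn, n_theta=64):
    """Sample a tube by sweeping a star-shaped cross-section r(theta)
    along a spine given as an (N, 3) polyline."""
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rings = []
    for i in range(len(spine)):
        # Spine tangent by finite differences (one-sided at the ends).
        t = spine[min(i + 1, len(spine) - 1)] - spine[max(i - 1, 0)]
        t = t / np.linalg.norm(t)
        # Orthonormal frame (t, u, v); u, v span the cross-section plane.
        up = np.array([0.0, 0.0, 1.0])
        if abs(t @ up) > 0.9:            # avoid a degenerate frame
            up = np.array([1.0, 0.0, 0.0])
        u = np.cross(t, up); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        r = radius_fn(theta)             # star-shaped: r(theta) > 0
        ring = spine[i] + r[:, None] * (np.cos(theta)[:, None] * u
                                        + np.sin(theta)[:, None] * v)
        rings.append(ring)
    return np.array(rings)               # shape (N, n_theta, 3)

# Straight spine along z with a five-lobed star cross-section:
spine = np.stack([np.zeros(20), np.zeros(20), np.linspace(0, 1, 20)], axis=1)
tube = sweep_cylinder(spine, lambda th: 1.0 + 0.3 * np.cos(5 * th))
```

A production version would use a twist-free (parallel transport) frame for curved spines; the simple fixed-up-vector frame here is only adequate for gentle curvature.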
A probabilistic contour discriminant for object localisation
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710748
J. MacCormick, A. Blake
Abstract: A method of localising objects in images is proposed. Possible configurations are evaluated using the contour discriminant, a likelihood ratio which is derived from a probabilistic model of the feature detection process. We treat each step in this process probabilistically, including the occurrence of clutter features, and derive the observation densities for both correct "target" configurations and incorrect "clutter" configurations. The contour discriminant distinguishes target objects from the background even in heavy clutter, making only the most general assumptions about the form that clutter might take. The method generates samples stochastically to avoid the cost of processing an entire image, and promises to be particularly suited to the task of initialising contour trackers based on sampling methods.
Citations: 61
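A likelihood ratio of this kind can be sketched on a single 1-D measurement line. The generative model below (the true edge detected with probability q and Gaussian localisation error, clutter features arriving as a Poisson process) is the standard one for such discriminants; the constants and function name are illustrative, not taken from the paper:

```python
import math

def discriminant(features, predicted, sigma=2.0, clutter_rate=0.05, q=0.9):
    """Likelihood ratio p(features | target at `predicted`) /
    p(features | clutter only) on one measurement line.

    Model (assumed, illustrative): the true edge is detected with
    probability q and Gaussian error sigma; clutter features form a
    Poisson process with density clutter_rate per unit length.  Under
    that model the ratio reduces to
        (1 - q) + (q / clutter_rate) * sum_i N(z_i; predicted, sigma^2).
    """
    gauss = sum(math.exp(-0.5 * ((z - predicted) / sigma) ** 2)
                / (sigma * math.sqrt(2 * math.pi)) for z in features)
    return (1 - q) + (q / clutter_rate) * gauss

# A feature near the predicted position scores well above 1 (target
# favoured); with no nearby feature the ratio collapses toward 1 - q.
on_target = discriminant([10.1, 37.0], predicted=10.0)
off_target = discriminant([37.0], predicted=10.0)
```

Evaluating this ratio at stochastically sampled configurations, rather than densely over the image, is what keeps the localisation cheap.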
3D reconstruction with projective octrees and epipolar geometry
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710849
B. Garcia, P. Brunet
Abstract: In this paper, the problem of generating a 3D octree-like structure with the help of epipolar geometry within a projective framework is addressed. After a brief introduction to the basics of octrees and epipolar geometry, the new concept of a "projective octree" is introduced, together with an algorithm for building this projective structure. Finally, results of the implementation are presented in the last section, together with conclusions and future work.
Citations: 15
Object tracking using deformable templates
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710756
Yu Zhong, Anil K. Jain, M. Jolly
Abstract: We propose a novel method for object tracking using prototype-based deformable template models. To track an object in an image sequence, we use a criterion which combines two terms: the deviation of the object shape from its shape in the previous frame, and the fidelity of the detected shape to the input image. Shape and gradient information are used to track the object. We have also used the consistency between corresponding object regions throughout the sequence to help in tracking the object of interest. Inter-frame motion is also used to track the boundary of moving objects. We have applied the algorithm to a number of image sequences from different sources. The inherent structure in the deformable template, together with region, motion, and image gradient cues, makes the algorithm relatively insensitive to the adverse effects of weak image features and moderate partial occlusion.
Citations: 197
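The two-term criterion (deviation from the previous frame's shape plus fidelity of the shape to the image) can be sketched with a deliberately tiny search: only integer translations of the previous contour are considered here, whereas the paper deforms the full template. All names and constants below are illustrative:

```python
import numpy as np

def track_step(image_grad, prev_pts, search=3, lam=0.5):
    """One tracking step: pick the integer translation d = (dy, dx)
    of the previous shape minimising
        E(d) = lam * |d|^2  -  sum of gradient magnitude under the shape,
    i.e. a shape-deviation term plus a (negative) image-fidelity term."""
    h, w = image_grad.shape
    best, best_e = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pts = prev_pts + np.array([dy, dx])
            # Discard candidates that leave the image.
            if pts.min() < 0 or pts[:, 0].max() >= h or pts[:, 1].max() >= w:
                continue
            fidelity = image_grad[pts[:, 0], pts[:, 1]].sum()
            e = lam * (dy * dy + dx * dx) - fidelity
            if e < best_e:
                best_e, best = e, (dy, dx)
    return best

# Synthetic gradient image with a strong vertical edge at column 12;
# the previous contour sits at column 10, so the tracker should move
# the shape two pixels right and none vertically.
g = np.zeros((32, 32)); g[:, 12] = 1.0
prev = np.stack([np.arange(10, 20), np.full(10, 10)], axis=1)  # (row, col)
shift = track_step(g, prev)
```

The lam parameter plays the role of the shape-deviation weight: raising it makes the tracker trust the previous frame more and the image evidence less.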
Recognizing novel 3-D objects under new illumination and viewing position using a small number of example views or even a single view
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710713
E. Sali, S. Ullman
Abstract: A method is presented for class-based recognition using a small number of example views taken under several different viewing conditions. The main emphasis is on using a small number of examples. Previous work assumed that the set of examples is sufficient to span the entire space of possible objects, and that, in generalizing to new viewing conditions, a sufficient number of previous examples under those conditions would be available to the recognition system. Here we have considerably relaxed these assumptions and consequently obtained good class-based generalization from a small number of examples, even a single example view, for both viewing position and illumination changes. In addition, previous class-based approaches focused only on viewing position changes and did not deal with illumination changes. Here we used a class-based approach that can generalize for both illumination and viewing position changes. The method was applied to face and car model images. New views under viewing position and illumination changes were synthesized from a small number of examples.
Citations: 22
State space construction for behavior acquisition in multi-agent environments with vision and action
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710819
E. Uchibe, M. Asada, K. Hosoda
Abstract: This paper proposes a method which estimates the relationships between a learner's behaviors and those of other agents in the environment through interactions (observation and action), using the method of system identification. In order to identify the model of each agent, Akaike's Information Criterion is applied to the results of Canonical Variate Analysis of the relationship between the observed data in terms of action and future observation. Next, reinforcement learning based on the estimated state vectors is performed to obtain the optimal behavior. The proposed method is applied to a soccer-playing situation, where a rolling ball and other moving agents are well modeled and the learner's behaviors are successfully acquired by the method. Computer simulations and real experiments are shown and a discussion is given.
Citations: 25
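Akaike's Information Criterion trades residual fit against model complexity. A minimal illustration on a scalar autoregressive model follows (the paper applies the same criterion to CVA state-space models of agent dynamics, not to AR fitting; names and constants here are illustrative):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the residual variance."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ coef).var()

def aic_order(x, max_p=6):
    """Pick the model order minimising AIC = n * log(sigma^2) + 2p:
    lower residual variance is rewarded, extra parameters penalised."""
    n = len(x)
    scores = {p: n * np.log(fit_ar(x, p)) + 2 * p for p in range(1, max_p + 1)}
    return min(scores, key=scores.get)

# Data from a true AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)
order = aic_order(x)
```

Because the lag-2 coefficient is substantial, dropping to AR(1) raises the residual variance far more than the two-parameter penalty saves, so AIC selects an order of at least 2.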
A chromaticity space for specularity, illumination color- and illumination pose-invariant 3-D object recognition
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710714
Daniel Berwick, S. W. Lee
Abstract: Most recent color recognition/indexing approaches concentrate on establishing invariance to illumination color to improve the utility of color recognition. However, other effects caused by illumination pose and specularity on three-dimensional object surfaces have not received notable attention. We present a chromaticity recognition method that discounts the effects of illumination pose, illumination color and specularity. It utilizes a chromaticity space based on log-ratios of sensor responses for illumination pose and color invariance. A model-based specularity detection/rejection algorithm can be used to improve chromaticity recognition and illumination estimation for objects that include specular reflections.
Citations: 48
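A chromaticity space built on log-ratios of sensor responses gets its invariance from the fact that multiplicative channel scalings become additive shifts in log space. A sketch of that mechanism follows; the exact space defined in the paper may differ, and the diagonal illumination model is an assumption of this illustration:

```python
import numpy as np

def log_ratio_chroma(rgb):
    """Map RGB sensor responses to the 2-D log-ratio chromaticity
    (log R/G, log B/G).

    Under a diagonal (von Kries-style) model: a change of illumination
    pose scales all three channels equally, leaving these coordinates
    unchanged; a change of illuminant color scales each channel
    separately, which merely translates every point by the same vector,
    so differences between chromaticities survive."""
    rgb = np.asarray(rgb, dtype=float)
    return np.stack([np.log(rgb[..., 0] / rgb[..., 1]),
                     np.log(rgb[..., 2] / rgb[..., 1])], axis=-1)

surface = np.array([[0.4, 0.5, 0.2],
                    [0.7, 0.3, 0.6]])                 # two surface colors
pose_change = 0.35 * surface                          # uniform shading factor
illum_change = surface * np.array([1.8, 1.1, 0.6])    # new illuminant color

chroma = log_ratio_chroma(surface)
```

Recognition can then match chromaticity differences (or translation-invariant statistics of the chromaticity set) rather than absolute colors.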
Euclidean structure from uncalibrated images using fuzzy domain knowledge: application to facial image synthesis
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710807
Zhengyou Zhang, K. Isono, S. Akamatsu
Abstract: The use of uncalibrated images has found many applications, such as image synthesis. However, it is not easy to specify the desired position of a new image in projective or affine space. This paper proposes to recover Euclidean structure from uncalibrated images using domain knowledge such as distances and angles. The knowledge available is usually about an object category, and is not very precise for the particular object being considered. This variation (fuzziness) is modeled as a Gaussian variable. Six types of common knowledge are formulated. Once we have a Euclidean description, the task of specifying the desired position in Euclidean space becomes trivial. The proposed technique is then applied to the synthesis of new facial images. A number of difficulties existing in image synthesis are identified and solved; for example, we propose to use edge points to deal with occlusion.
Citations: 17
Agent orientated annotation in model based visual surveillance
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710817
Paolo Remagnino, T. Tan, K. Baker
Abstract: The paper presents an agent-based surveillance system for use in monitoring scenes involving both pedestrians and vehicles. The system supplies textual descriptions of the dynamic activity occurring in the 3D world. These are derived by means of dynamic and probabilistic inference based on geometric information provided by a vision system that tracks vehicles and pedestrians. The symbolic scene annotation is given at two major levels of description: the object level and the inter-object level. At the object level, each tracked pedestrian or vehicle is assigned a behaviour agent which uses a Bayesian network to infer the fundamental features of the object's trajectory, and continuously updates its textual description. The inter-object interaction level is interpreted by a situation agent which is created dynamically when two objects are in close proximity. In the work included here, the situation agent can describe a two-object interaction in terms of basic textual annotations, to summarise the dynamics of the local action.
Citations: 101