Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001: Latest Publications

Learning inhomogeneous Gibbs model of faces by minimax entropy
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937530
Ce Liu, Song-Chun Zhu, H. Shum
Abstract: In this paper we propose a novel inhomogeneous Gibbs model by the minimax entropy principle, and apply it to face modeling. The maximum entropy principle generalizes the statistical properties of the observed samples and results in the Gibbs distribution, while the minimum entropy principle makes the learnt distribution close to the observed one. To capture the fine details of a face, an inhomogeneous Gibbs model is derived to learn the local statistics of facial feature points. To alleviate the high dimensionality problem of face models, we propose to learn the distribution in a subspace reduced by principal component analysis (PCA). We demonstrate that our model effectively captures important and subtle non-Gaussian face patterns and efficiently generates good face models.
Citations: 36
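The PCA subspace reduction mentioned in the abstract can be sketched generically as follows. This is a minimal illustration of fitting and projecting onto a principal subspace, not the authors' implementation; all function names are illustrative:

```python
import numpy as np

def pca_subspace(X, k):
    """Fit a k-dimensional PCA subspace to the rows of X (one sample per row)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]  # (k, d) orthonormal projection basis
    return mu, W

def project(x, mu, W):
    """Map a sample into the reduced subspace."""
    return W @ (x - mu)

def reconstruct(z, mu, W):
    """Map subspace coordinates back to the original space."""
    return W.T @ z + mu

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))       # stand-in for vectorised face samples
mu, W = pca_subspace(X, k=5)
z = project(X[0], mu, W)             # 5-dimensional code for the first sample
```

In the paper's setting the rows of `X` would be vectorised face representations; the Gibbs model is then learnt over the low-dimensional codes `z`.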
Real-time automated concurrent visual tracking of many animals and subsequent behavioural compilation
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937713
J. Zelek, D. Bullock
Abstract: One of our major research focus areas is real-time visual tracking and monitoring of moving and static objects in a video sequence. In particular we are interested in (1) object localization (also referred to as the focus of attention), which involves identifying the object of interest, (2) tracking the object using a model of the object which was initiated in step 1, and (3) understanding the accumulation of movements of the object over time (i.e., behavior). The objects of interest for the purpose of this proposal are pigs. Automatically monitoring pigs via a non-invasively placed camera in their pens is interesting because the pigs are monitored in their natural habitat. Visual tracking involves modelling the object of interest and keeping track of its position and orientation through time. Issues include tracker recovery from error and preventing the tracker from jumping to other pigs. We have been able to demonstrate tracking pigs at about 10-15 Hz; however, the tracker tends to drift off the target eventually. We have only experimented with a single pig, but our initial tests indicate that we can probably track at least 10 pigs simultaneously. Some unknowns include determining how quickly the pigs move and the type of motions, including quick jerky movements. Our preliminary investigations revealed that a blob tracker is insufficient for producing accurate traces.
Citations: 2
A curve evolution approach for image segmentation using adaptive flows
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937666
Haihua Feng, D. Castañón, W. C. Karl
Abstract: In this paper, we develop a new active contour model for image segmentation using adaptive flows. This active contour model can be derived from minimizing a limiting form of the Mumford-Shah functional, where the segmented image is assumed to consist of piecewise constant regions. This paper is an extension of an active contour model developed by Chan-Vese. The segmentation method proposed in this paper adaptively estimates mean intensities for each separated region and uses a single curve to capture multiple regions with different intensities. The class of imagery that our new active contour model can handle extends beyond bimodal images. In particular, our method segments images with an arbitrary number of intensity levels and separated regions while avoiding the complexity of solving a full Mumford-Shah problem. The adaptive flow developed in this paper is easily formulated and solved using level set methods. We illustrate the performance of our segmentation methods on images generated by different modalities.
Citations: 26
Viewpoint invariant texture matching and wide baseline stereo
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937686
F. Schaffalitzky, Andrew Zisserman
Abstract: We describe and demonstrate a texture region descriptor which is invariant to affine geometric and photometric transformations, and insensitive to the shape of the texture region. It is applicable to texture patches which are locally planar and have stationary statistics. The novelty of the descriptor is that it is based on statistics aggregated over the region, resulting in richer and more stable descriptors than those computed at a point. Two texture matching applications of this descriptor are demonstrated: (1) it is used to automatically identify regions of the same type of texture, but with varying surface pose, within a single image; (2) it is used to support wide baseline stereo, i.e. to enable the automatic computation of the epipolar geometry between two images acquired from quite separated viewpoints. Results are presented on several sets of real images.
Citations: 226
On the complexity of probabilistic image retrieval
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937653
N. Vasconcelos
Abstract: Probabilistic image retrieval approaches can lead to significant gains over standard retrieval techniques. However, this occurs at the cost of a significant increase in computational complexity. In fact, closed-form solutions for probabilistic retrieval are currently available only for simple representations such as the Gaussian and the histogram. We analyze the case of mixture densities and exploit the asymptotic equivalence between likelihood and Kullback-Leibler divergence to derive solutions for these models. In particular, we show that (1) the divergence can be computed exactly for vector quantizers, and (2) it has an approximate solution for Gaussian mixtures that introduces no significant degradation of the resulting similarity judgments. In both cases, the new solutions have closed form and computational complexity equivalent to that of standard retrieval approaches, but significantly better retrieval performance.
Citations: 60
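The building block behind the Gaussian-mixture approximation discussed in the abstract is the closed-form Kullback-Leibler divergence between two multivariate Gaussians. The sketch below shows only this Gaussian case, not the paper's mixture approximation itself:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)      # covariance mismatch
                  + diff @ S1_inv @ diff     # mean separation
                  - d                        # dimension offset
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

For example, KL between two unit-variance 1-D Gaussians with means 0 and 1 is 0.5; a mixture-level similarity can then be assembled from such pairwise component divergences.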
A maximum likelihood framework for iterative eigendecomposition
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937582
A. Robles-Kelly, E. Hancock
Abstract: This paper presents an iterative maximum likelihood framework for perceptual grouping. We pose the problem of perceptual grouping as one of pairwise relational clustering. The method is quite generic and can be applied to a number of problems including region segmentation and line-linking. The task is to assign image tokens to clusters in which there is strong relational affinity between token pairs. The parameters of our model are the cluster memberships and the link weights between pairs of tokens. Commencing from a simple probability distribution for these parameters, we show how they may be estimated using an EM-like algorithm. The cluster memberships are estimated using an eigendecomposition method. Once the cluster memberships are to hand, the updated link weights are the expected values of their pairwise products. The new method is demonstrated on region segmentation and line-segment grouping problems, where it is shown to outperform a noniterative eigenclustering method.
Citations: 10
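The noniterative eigenclustering baseline the paper compares against can be sketched as reading cluster membership off the leading eigenvector of the pairwise affinity matrix. This is a deliberately minimal stand-in, with an illustrative sign fix and thresholding rule, and is not the authors' iterative EM framework:

```python
import numpy as np

def eigen_cluster(A):
    """Boolean cluster membership from the leading eigenvector of a
    symmetric token-affinity matrix A."""
    w, V = np.linalg.eigh(A)
    lead = V[:, np.argmax(w)]                 # eigenvector of the largest eigenvalue
    lead = lead * np.sign(lead.sum() or 1.0)  # resolve the sign ambiguity
    return lead > lead.mean()                 # tokens with large components join the cluster
```

The iterative scheme in the paper alternates such an eigendecomposition step with re-estimating link weights from the current memberships, rather than stopping after one pass.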
True single view point cone mirror omni-directional catadioptric system
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937610
Shih-Schon Lin, R. Bajcsy
Abstract: The pinhole camera model is a simplified subset of geometric optics. In special cases like the image formation of the cone (a degenerate conic section) mirror in an omnidirectional view catadioptric system, there are more complex optical phenomena involved that the simple pinhole model cannot explain. We show that using the full geometric optics model a true single viewpoint cone mirror omni-directional system can be built. We show how such a system is built first, and then show in detail how each optical phenomenon works together to make the system true single viewpoint. The new system requires only simple off-the-shelf components and still outperforms other single viewpoint omni-systems for many applications.
Citations: 39
Fast algorithm for nearest neighbor search based on a lower bound tree
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937551
Yong-Sheng Chen, Y. Hung, C. Fuh
Abstract: This paper presents a novel algorithm for fast nearest neighbor search. At the preprocessing stage, the proposed algorithm constructs a lower bound tree by agglomeratively clustering the sample points in the database. Calculation of the distance between the query and the sample points can be avoided if the lower bound of the distance is already larger than the minimum distance. The search process can thus be accelerated because the computational cost of the lower bound, which can be calculated using the internal nodes of the lower bound tree, is less than that of the distance. To reduce the number of lower bounds actually calculated, the winner-update search strategy is used for traversing the tree. Moreover, the query and the sample points can be transformed for further efficiency improvement. Our experiments show that the proposed algorithm can greatly speed up the nearest neighbor search process. When applied to the real database used in Nayar's object recognition system, the proposed algorithm is about one thousand times faster than the exhaustive search.
Citations: 22
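The pruning idea in the abstract — skip a group of points when a cheap lower bound on the distance already exceeds the current minimum — can be shown with a flat, two-level analogue of the lower bound tree. The clustering below is a stand-in for the paper's agglomerative tree, and the visiting order is a simplified take on the winner-update strategy:

```python
import numpy as np

def nn_search(query, clusters):
    """Exact nearest neighbour using triangle-inequality lower bounds.

    clusters: list of (center, points) pairs. For any p in a cluster,
    dist(q, p) >= dist(q, center) - radius, so the whole cluster can be
    skipped once that bound exceeds the best distance found so far.
    """
    best, best_d = None, np.inf
    # Visit the most promising cluster first, so best_d shrinks early.
    order = sorted(clusters, key=lambda c: np.linalg.norm(query - c[0]))
    for center, points in order:
        radius = max(np.linalg.norm(p - center) for p in points)
        if np.linalg.norm(query - center) - radius >= best_d:
            continue  # lower bound already rules out every point here
        for p in points:
            d = np.linalg.norm(query - p)
            if d < best_d:
                best, best_d = p, d
    return best, best_d
```

Because the bound is only ever used to skip points that provably cannot win, the result matches exhaustive search; the tree in the paper simply applies this test hierarchically.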
Colour photometric stereo: simultaneous reconstruction of local gradient and colour of rough textured surfaces
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937681
S. Barsky, M. Petrou
Abstract: Classification of a rough 3D surface from 2D images may be difficult due to directional effects introduced by illumination. One possible way of dealing with the problem is to extract the local albedo and gradient surface information, which do not depend on the illumination, and classify the texture directly using these intrinsic characteristics. In this paper we present an algorithm for simultaneous recovery of local gradient and colour using multiple photometric images. The algorithm is proven to be optimal in the least squares error sense. Experimental results with real images and comparison with other approaches are also presented.
Citations: 27
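The least-squares recovery described in the abstract builds on classic Lambertian photometric stereo, sketched below for a single greyscale pixel; the colour version in the paper extends this per channel. The function name and interfaces are illustrative, not the authors' formulation:

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover surface normal and albedo at one Lambertian pixel.

    L: (m, 3) unit light directions; I: (m,) observed intensities.
    Solves I = L @ g in the least squares sense, where g = albedo * normal.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo if albedo > 0 else g
    return normal, albedo
```

With three or more non-coplanar lights the system is (over)determined, which is what makes a least-squares-optimal simultaneous estimate of gradient and albedo possible.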
Stochastic road shape estimation
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001. Pub Date: 2001-07-07. DOI: 10.1109/ICCV.2001.937519
B. Southall, C. J. Taylor
Abstract: We describe a new system for estimating road shape ahead of a vehicle for the purpose of driver assistance. The method utilises a single on-board colour camera, together with inertial and velocity information, to estimate both the position of the host car with respect to the lane it is following and also the width and curvature of the lane ahead at distances of up to 80 metres. The system's image processing extracts a variety of different styles of lane markings from road imagery, and is able to compensate for a range of lighting conditions. Road shape and car position are estimated using a particle filter. The system, which runs at 10.5 frames per second, has been applied with some success to several hours' worth of data captured from highways under varying imaging conditions.
Citations: 182
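The particle filter used for the road-shape estimate follows the standard predict-weight-resample cycle, sketched here for a generic 1-D state. The `motion` and `measure` callables are placeholders, not the paper's road-geometry model or lane-marking likelihood:

```python
import numpy as np

def particle_filter_step(particles, weights, motion, measure, rng):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    particles = motion(particles, rng)        # predict: propagate with process noise
    weights = weights * measure(particles)    # weight: multiply in the observation likelihood
    weights = weights / weights.sum()
    # Resample proportionally to weight so the set tracks the posterior.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In the paper's setting each particle would carry the lane state (lateral offset, width, curvature) and the likelihood would score extracted lane markings against the road shape each particle predicts.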