Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition: Latest Publications

Stride and cadence as a biometric in automatic person identification and verification
Chiraz BenAbdelkader, L. Davis, Ross Cutler
DOI: 10.1109/AFGR.2002.1004182 (published 2002-05-21)
Abstract: Presents a correspondence-free method to automatically estimate the spatio-temporal parameters of gait (stride length and cadence) of a walking person from video. Stride and cadence are functions of body height, weight and gender, and we use these biometrics for identification and verification of people. The cadence is estimated using the periodicity of a walking person. Using a calibrated camera system, the stride length is estimated by first tracking the person and estimating their distance travelled over a period of time. By counting the number of steps (again using periodicity) and assuming constant-velocity walking, we are able to estimate the stride to within 1 cm for a typical outdoor surveillance configuration (under certain assumptions). With a database of 17 people and eight samples of each, we show that a person is verified with an equal error rate (EER) of 11%, and correctly identified with a probability of 40%. This method works with low-resolution images of people and is robust to changes in lighting, clothing and tracking errors. It is view-invariant, though performance is optimal in a near-fronto-parallel configuration.
Citations: 275
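The two gait parameters above are simple to recover once a periodic signal (e.g. silhouette width) and a travelled distance are available. The sketch below is a minimal illustration, not the paper's pipeline: cadence is read off the first non-trivial autocorrelation peak of the periodic signal, and stride follows from the constant-velocity assumption (distance divided by stride count, two steps per stride).

```python
import numpy as np

def estimate_cadence(width_signal, fps):
    """Estimate cadence (steps/min) from a periodic gait signal via
    autocorrelation: the first peak after the zero-lag peak gives the
    step period in frames."""
    x = np.asarray(width_signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    d = np.diff(ac)
    # first trough after lag 0: slope turns from non-positive to positive
    trough = np.where((d[:-1] <= 0) & (d[1:] > 0))[0][0]
    start = trough + 1
    peak = start + np.argmax(ac[start:start + len(x) // 2])
    step_period_s = peak / fps
    return 60.0 / step_period_s  # steps per minute

def estimate_stride(distance_m, n_steps):
    """Stride length under the constant-velocity assumption: distance
    travelled divided by the number of strides (one stride = 2 steps)."""
    return distance_m / (n_steps / 2.0)
```

With a 30 fps camera and a step every 0.5 s, `estimate_cadence` returns 120 steps/min; 12 m covered in 16 steps gives a 1.5 m stride.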
Motion-based recognition of people in EigenGait space
Chiraz BenAbdelkader, L. Davis, Ross Cutler
DOI: 10.1109/AFGR.2002.1004165 (published 2002-05-21)
Abstract: A motion-based, correspondence-free technique for human gait recognition in monocular video is presented. We contend that the planar dynamics of a walking person are encoded in a 2D plot consisting of the pairwise image similarities of the sequence of images of the person, and that gait recognition can be achieved via standard pattern classification of these plots. We use background modelling to track the person for a number of frames and extract a sequence of segmented images of the person. The self-similarity plot is computed via correlation of each pair of images in this sequence. For recognition, the method applies principal component analysis to reduce the dimensionality of the plots, then uses the k-nearest neighbor rule in this reduced space to classify an unknown person. This method is robust to tracking and segmentation errors, and to variation in clothing and background. It is also invariant to small changes in camera viewpoint and walking speed. The method is tested on outdoor sequences of 44 people with 4 sequences of each taken on two different days, and achieves a classification rate of 77%. It is also tested on indoor sequences of 7 people walking on a treadmill, taken from 8 different viewpoints and on 7 different days. A classification rate of 78% is obtained for near-fronto-parallel views, and 65% on average over all views.
Citations: 210
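The pipeline the abstract describes can be sketched in a few lines. This is a simplified stand-in, not the authors' implementation: the plot below uses an absolute-difference dissimilarity rather than the paper's correlation measure, and PCA plus 1-nearest-neighbour is applied to the flattened plots exactly as the abstract outlines.

```python
import numpy as np

def self_similarity(frames):
    """Self-similarity plot: pairwise (dis)similarity between every pair
    of flattened silhouette images in the sequence. Absolute difference
    is used here as a stand-in for the paper's correlation measure."""
    F = np.asarray([np.asarray(f, dtype=float).ravel() for f in frames])
    n = len(F)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = np.abs(F[i] - F[j]).sum()
    return S

def eigengait_classify(train_plots, train_labels, test_plot, k=1):
    """Project flattened plots onto principal components (the 'EigenGait'
    space) and classify with the k-nearest-neighbour rule."""
    X = np.asarray([p.ravel() for p in train_plots])
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    proj = Xc @ Vt.T
    q = (test_plot.ravel() - mu) @ Vt.T
    d = np.linalg.norm(proj - q, axis=1)
    votes = [train_labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```

In practice each training "plot" would itself be a self-similarity matrix computed from one walking sequence.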
Detection of people carrying objects: a motion-based recognition approach
Chiraz BenAbdelkader, L. Davis
DOI: 10.1109/AFGR.2002.1004183 (published 2002-05-21)
Abstract: We describe a method to detect instances of a walking person carrying an object seen from a stationary camera. We take a correspondence-free motion-based recognition approach, that exploits known shape and periodicity cues of the human silhouette shape. Specifically, we subdivide the binary silhouette into four horizontal segments, and analyze the temporal behavior of the bounding box width over each segment. We posit that the periodicity and amplitudes of these time series satisfy certain criteria for a natural walking person, and deviations therefrom are an indication that the person might be carrying an object. The method is tested on 41 outdoor color sequences (360×240) of people walking and carrying objects at various poses and camera viewpoints. A correct detection rate of 85% and a false alarm rate of 12% are obtained.
Citations: 54
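The feature extraction step is concrete enough to sketch: split each binary silhouette into four horizontal bands and track the per-band bounding-box width over time. A natural gait shows a strong periodic swing in the leg bands; the decision criteria on these series are the paper's own and are not reproduced here.

```python
import numpy as np

def segment_width_series(silhouettes, n_segments=4):
    """For each binary silhouette, split it into `n_segments` horizontal
    bands and record the bounding-box width of each band over time.
    Returns an (n_segments, T) array of widths in pixels."""
    T = len(silhouettes)
    series = np.zeros((n_segments, T))
    for t, sil in enumerate(silhouettes):
        bands = np.array_split(np.asarray(sil), n_segments, axis=0)
        for s, band in enumerate(bands):
            cols = np.where(band.any(axis=0))[0]
            series[s, t] = cols[-1] - cols[0] + 1 if cols.size else 0
    return series

def band_amplitude(series):
    """Peak-to-peak swing of each band's width series; carried objects
    tend to damp or distort the swing of the affected bands."""
    return series.max(axis=1) - series.min(axis=1)
```

For an unencumbered walker the bottom (leg) band shows a large amplitude while the torso bands stay nearly constant.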
Interacting with steerable projected displays
R. Kjeldsen, Claudio S. Pinhanez, G. Pingali, J. Hartman, A. Levas, Mark Podlaseck
DOI: 10.1109/AFGR.2002.1004187 (published 2002-05-20)
Abstract: When computer vision is combined with a steerable projector, any surface in an environment can be turned into an interactive interface, without having to modify or wire the surface. Steerable projected displays offer rich opportunities and pose new challenges for interaction based on gesture recognition. In this paper, we present real-time techniques for recognizing "touch" and "point" gestures on steerable projected displays produced by a new device called the Everywhere Displays projector (ED-projector). We demonstrate the viability of our approach through an experiment involving hundreds of users interacting with projected interfaces.
Citations: 75
Hierarchical wavelet networks for facial feature localization
R. Feris, J. Gemmell, K. Toyama, V. Krüger
DOI: 10.1109/AFGR.2002.1004143 (published 2002-05-20)
Abstract: We present a technique for facial feature localization using a two-level hierarchical wavelet network. The first level wavelet network is used for face matching, and yields an affine transformation used for a rough approximation of feature locations. Second level wavelet networks for each feature are then used to fine-tune the feature locations. Construction of a training database containing hierarchical wavelet networks of many faces allows features to be detected in most faces. Experiments show that facial feature localization benefits significantly from the hierarchical approach. Results compare favorably with existing techniques for feature localization.
Citations: 131
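The coarse-to-fine structure is the key idea. The sketch below substitutes simple SSD template matching for the wavelet networks (an assumption made purely to keep the example self-contained): a whole-face match gives a rough estimate of a feature's position via a fixed offset, and a local search around that estimate fine-tunes it, mirroring the two-level scheme.

```python
import numpy as np

def locate(image, template):
    """Best match position (row, col) of a template under the sum of
    squared differences criterion (exhaustive search)."""
    H, W = image.shape
    h, w = template.shape
    best, pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = np.sum((image[y:y + h, x:x + w] - template) ** 2)
            if d < best:
                best, pos = d, (y, x)
    return pos

def hierarchical_localise(image, face_tpl, feature_tpl, feat_offset, search=2):
    """Two-level localization in the spirit of the paper (templates stand
    in for wavelet networks): the face-level match plus a nominal offset
    gives a rough feature position; a local search refines it."""
    fy, fx = locate(image, face_tpl)               # level 1: whole face
    gy, gx = fy + feat_offset[0], fx + feat_offset[1]
    h, w = feature_tpl.shape
    y0, x0 = max(gy - search, 0), max(gx - search, 0)
    y1 = min(gy + search + h, image.shape[0])
    x1 = min(gx + search + w, image.shape[1])
    ly, lx = locate(image[y0:y1, x0:x1], feature_tpl)  # level 2: refine
    return y0 + ly, x0 + lx
```

The refinement window (`search`) bounds how far the fine-tuning step may move a feature away from the level-1 estimate.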
Head tracking by active particle filtering
Zhihong Zeng, Songde Ma
DOI: 10.1109/AFGR.2002.1004137 (published 2002-05-20)
Abstract: Particle filtering has attracted much attention due to its robust tracking performance in clutter. However, a price to pay for its robustness is the computational cost. Active particle filtering is proposed in this paper. Unlike traditional particle filtering, every particle in active particle filtering is first driven to its local maximum of the likelihood before it is weighted. In this case, the efficiency of every particle is improved and the number of required particles is greatly reduced. Actually, the number of particles in the active particle filtering is based more on the cluttered degree of the environment and the fitting range of every particle than on the size of the model's configuration space. Extensive experimental results show that the tracker is efficient and robust in tracking a head undergoing translation and full 360° out-of-plane rotation with partial occlusion in cluttered environments.
Citations: 24
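The "active" modification slots into a standard predict-weight-resample loop: each particle is pushed toward a nearby likelihood maximum before it is weighted. The 1-D sketch below is a toy illustration under stated assumptions: a random-walk motion model, and crude finite-difference hill climbing standing in for the paper's local optimisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_particle_step(particles, likelihood, motion_std=1.0,
                         refine_iters=5, step=0.5):
    """One step of an 'active' particle filter (1-D toy version): diffuse
    particles, drive each toward a local maximum of the likelihood by
    hill climbing, then weight and resample."""
    # 1. prediction: random-walk motion model
    p = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 2. active refinement: fixed-size steps up the likelihood gradient
    #    (finite differences; a crude stand-in for local optimisation)
    eps = 1e-3
    for _ in range(refine_iters):
        grad = (likelihood(p + eps) - likelihood(p - eps)) / (2 * eps)
        p = p + step * np.sign(grad)
    # 3. weighting and multinomial resampling
    w = likelihood(p)
    w = w / w.sum()
    idx = rng.choice(len(p), size=len(p), p=w)
    return p[idx]
```

Because every particle ends up near a likelihood mode before weighting, far fewer particles are needed than in a plain bootstrap filter, which is the efficiency argument the abstract makes.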
On probabilistic combination of face and gait cues for identification
Gregory Shakhnarovich, Trevor Darrell
DOI: 10.1109/AFGR.2002.1004151 (published 2002-05-20)
Abstract: We approach the task of person identification based on face and gait cues. The cues are derived from multiple simultaneous camera views, combined through the visual hull algorithm to create imagery in canonical pose prior to recognition. These view-normalized sequences, containing frontal images of face and profile silhouettes, are separately used for face and gait recognition, and the results may be combined using a range of strategies. We discuss the issues of cross-modal correlation and score transformations for different modalities, present the probabilistic settings for the cross-modal fusion and explore several common fusion approaches. The effectiveness of various strategies is evaluated on a data set with 26 subjects. We hope that the discussion presented in this paper may be useful in developing further statistical frameworks for multi-modal recognition.
Citations: 102
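One of the common fusion approaches the abstract alludes to is score normalization followed by a weighted sum rule. The sketch below is a generic example of that family, not the paper's specific probabilistic formulation: each modality's per-subject scores are z-normalized so they are comparable, then combined linearly.

```python
import numpy as np

def z_normalise(scores):
    """Map one modality's match scores to zero mean and unit variance so
    face and gait scores live on a comparable scale before fusion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def sum_rule_fusion(face_scores, gait_scores, w_face=0.5):
    """Weighted sum-rule fusion of per-subject similarity scores; the
    subject index with the highest fused score is the identification
    decision."""
    fused = (w_face * z_normalise(face_scores)
             + (1 - w_face) * z_normalise(gait_scores))
    return int(np.argmax(fused)), fused
```

The weight `w_face` would normally be tuned on held-out data; cross-modal correlation (which the paper analyses) determines how much fusion can actually help.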
A modification of kernel-based Fisher discriminant analysis for face detection
Takio Kurita, Toshiharu Taguchi
DOI: 10.1109/AFGR.2002.1004170 (published 2002-05-20)
Abstract: Presents a modification of kernel-based Fisher discriminant analysis (FDA) for face detection. In a face detection problem, it is important to design a two-category classifier which can decide whether the given input sub-image is a face or not. There is a difficulty with training such two-category classifiers because the "non-face" class includes many images of different kinds of objects, and it is difficult to treat them all as a single class. Also, the dimension of the discriminant space constructed by the usual FDA is limited to one for two-category classification. To overcome these problems with the usual FDA, the discriminant criterion of the usual FDA is modified such that the covariance of the "face" class is minimized while the differences between the center of the "face" class and each training sample of the "non-face" class are maximized. By this modification, we can obtain a higher-dimensional discriminant space which is suitable for "face/non-face" classification. It is shown that the proposed method can outperform a support vector machine (SVM) in "face/non-face" classification experiments using face images gathered from the available face databases and many face images on the Web.
Citations: 24
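The modified criterion translates directly into a generalised eigenproblem. The sketch below is a linear (non-kernel) rendering of the idea, which is an assumption for brevity since the paper works in a kernel-induced feature space: the within-face covariance is minimised while the scatter of non-face samples around the face mean is maximised, and the leading eigenvectors span the higher-dimensional discriminant space.

```python
import numpy as np

def modified_fda(faces, nonfaces, n_components=2, reg=1e-6):
    """Linear version of the modified discriminant criterion: maximise
    the spread of non-face samples around the face-class mean relative
    to the face-class covariance, via eig(Sw^-1 Sb)."""
    mu = faces.mean(axis=0)
    Fw = faces - mu
    Sw = Fw.T @ Fw / len(faces) + reg * np.eye(faces.shape[1])
    D = nonfaces - mu                       # deviations from face mean
    Sb = D.T @ D / len(nonfaces)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:n_components]
    return vecs[:, order].real              # discriminant directions
```

Unlike standard two-class FDA, whose discriminant space is one-dimensional, this criterion yields as many useful directions as there are dominant eigenvalues, which is the point the abstract emphasises.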
Kernel Eigenfaces vs. Kernel Fisherfaces: Face recognition using kernel methods
Ming-Hsuan Yang
DOI: 10.1109/FGR.2002.10001 (published 2002-05-20)
Abstract: Principal Component Analysis and Fisher Linear Discriminant methods have demonstrated their success in face detection, recognition and tracking. The representations in these subspace methods are based on second order statistics of the image set, and do not address higher order statistical dependencies such as the relationships among three or more pixels. Recently Higher Order Statistics and Independent Component Analysis (ICA) have been used as informative representations for visual recognition. In this paper, we investigate the use of Kernel Principal Component Analysis and Kernel Fisher Linear Discriminant for learning low dimensional representations for face recognition, which we call Kernel Eigenface and Kernel Fisherface methods. While Eigenface and Fisherface methods aim to find projection directions based on second order correlation of samples, Kernel Eigenface and Kernel Fisherface methods provide generalizations which take higher order correlations into account. We compare the performance of kernel methods with classical algorithms such as Eigenface, Fisherface, ICA, and Support Vector Machine (SVM) within the context of the appearance-based face recognition problem using two data sets where images vary in pose, scale, lighting and expression. Experimental results show that kernel methods provide better representations and achieve lower error rates for face recognition.
Citations: 834
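The Kernel Eigenface construction is standard kernel PCA applied to face images. A minimal sketch, assuming an RBF kernel (the paper evaluates several kernels): form the kernel matrix over training images, double-centre it in feature space, eigendecompose, and project the samples onto the leading components.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.1):
    """Kernel PCA with an RBF kernel: centre the kernel matrix in
    feature space, eigendecompose it, and return the projections of the
    training samples onto the leading components ('Kernel Eigenfaces'
    when X holds flattened face images)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # double centring
    vals, vecs = np.linalg.eigh(Kc)                 # ascending order
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # feature-space norm
    return Kc @ alphas                              # sample projections
```

Because the kernel is nonlinear in the pixels, these components capture the higher-order pixel dependencies that plain Eigenfaces (linear PCA) cannot.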
Facial asymmetry quantification for expression invariant human identification
Yanxi Liu, Karen L. Schmidt, J. Cohn, S. Mitra
DOI: 10.1109/AFGR.2002.1004156 (published 2002-05-20)
Abstract: We investigate the effect of quantified statistical facial asymmetry as a biometric under expression variations. Our findings show that the facial asymmetry measures (AsymFaces) are computationally feasible, containing discriminative information and providing synergy when combined with Fisherface and Eigenface methods on image data of two publicly available face databases (Cohn-Kanade (T. Kanade et al., 1999) and Feret (P.J. Phillips et al., 1998)).
Citations: 154
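A basic asymmetry measure of the kind the abstract refers to can be computed by comparing a midline-aligned face image with its left-right mirror. This is an assumption-level sketch of such a measure, not the paper's exact AsymFaces definition:

```python
import numpy as np

def density_difference(face):
    """Per-pixel asymmetry: absolute difference between a midline-aligned
    face image and its left-right mirror (zero for a perfectly
    symmetric face)."""
    face = np.asarray(face, dtype=float)
    return np.abs(face - face[:, ::-1])

def asymmetry_score(face):
    """Scalar asymmetry: mean of the density difference over the image;
    rows of the difference map could equally serve as feature vectors."""
    return float(density_difference(face).mean())
```

In a recognition system the full difference map, rather than the scalar, would be fed to the classifier alongside Eigenface or Fisherface features.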