Latest publications from the 7th International Conference on Automatic Face and Gesture Recognition (FGR06)

Evaluating error functions for robust active appearance models
B. Theobald, I. Matthews, Simon Baker
{"title":"Evaluating error functions for robust active appearance models","authors":"B. Theobald, I. Matthews, Simon Baker","doi":"10.1109/FGR.2006.38","DOIUrl":"https://doi.org/10.1109/FGR.2006.38","url":null,"abstract":"Active appearance models (AAMs) are generative parametric models commonly used to track faces in video sequences. A limitation of AAMs is they are not robust to occlusion. A recent extension reformulated the search as an iteratively re-weighted least-squares problem. In this paper we focus on the choice of error function for use in a robust AAM search. We evaluate eight error functions using two performance metrics: accuracy of occlusion detection and fitting robustness. We show for any reasonable error function the performance in terms of occlusion detection is the same. However, this does not mean that fitting performance is the same. We describe experiments for measuring fitting robustness for images containing real occlusion. The best approach assumes the residuals at each pixel are Gaussianally distributed, then estimates the parameters of the distribution from images that do not contain occlusion. In each iteration of the search, the error image is used to sample these distributions to obtain the pixel weights","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121287402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
Estimation of Anthropomeasures from a Single Calibrated Camera
Chiraz BenAbdelkader, L. Davis
{"title":"Estimation of Anthropomeasures from a Single Calibrated Camera","authors":"Chiraz BenAbdelkader, L. Davis","doi":"10.1109/FGR.2006.37","DOIUrl":"https://doi.org/10.1109/FGR.2006.37","url":null,"abstract":"We are interested in the recovery of anthropometric dimensions of the human body from calibrated monocular sequences, and their use in multi-target tracking across multiple cameras and identification of individual people. In this paper, we focus on two specific anthropomeasures that are relatively easy to estimate from low-resolution images: stature and shoulder breadth. Precise average estimates are obtained for each anthropomeasure by combining measurements from multiple frames in the sequence. Our contribution is two-fold: (i) a novel technique for automatic and passive estimation of shoulder breadth, that is based on modelling the shoulders as an ellipse, and (U) a novel method for increasing the accuracy of the mean estimates of both anthropomeasures. The latter is based on the observation that major sources of error in the measurements are landmark localization the 2D image and 3D modelling error, and that both of these are correlated with gait phase and body orientation with respect to the camera. Consequently, estimation error can be significantly reduced via appropriate selection or control of these two variables","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127791229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Preliminary Face Recognition Grand Challenge Results
P. Phillips, P. Flynn, W. T. Scruggs, K. Bowyer, W. Worek
{"title":"Preliminary Face Recognition Grand Challenge Results","authors":"P. Phillips, P. Flynn, W. T. Scruggs, K. Bowyer, W. Worek","doi":"10.1109/FGR.2006.87","DOIUrl":"https://doi.org/10.1109/FGR.2006.87","url":null,"abstract":"The goal of the face recognition grand challenge (FRGC) is to improve the performance of face recognition algorithms by an order of magnitude over the best results in face recognition vendor test (FRVT) 2002. The FRGC is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with a data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper presents preliminary results of the FRGC for all six experiments. The preliminary results indicate that significant progress has been made towards achieving the stated goals","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122906078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 202
Reliable and fast tracking of faces under varying pose
Tao Yang, S. Li, Q. Pan, Jing Li, Chunhui Zhao
{"title":"Reliable and fast tracking of faces under varying pose","authors":"Tao Yang, S. Li, Q. Pan, Jing Li, Chunhui Zhao","doi":"10.1109/FGR.2006.92","DOIUrl":"https://doi.org/10.1109/FGR.2006.92","url":null,"abstract":"This paper presents a system that is able to track multiple faces under varying pose (tilted and rotated) reliably in real-time. The system consists of two interactive modules. The first module performs detection of face subject to rotations. The second does online learning based face tracking. A mechanism of switching between the two modules is embedded into the system to automatically decide the best strategy for reliable tracking. The mechanism enables smooth transit between the detection and tracking module when one of them gives no results or unreliable results. Results demonstrate that the system can make reliable real-time tracking of multiple faces in complex background under out-of-plane rotation, up to 90 degree tilting, fast nonlinear motion, partial occlusion, large scale changes, and camera motion","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114562884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Gait tracking and recognition using person-dependent dynamic shape model
Chan-Su Lee, A. Elgammal
{"title":"Gait tracking and recognition using person-dependent dynamic shape model","authors":"Chan-Su Lee, A. Elgammal","doi":"10.1109/FGR.2006.58","DOIUrl":"https://doi.org/10.1109/FGR.2006.58","url":null,"abstract":"The characteristics of the 2D shape deformation in human motion contain rich information for human identification and pose estimation. In this paper, we introduce a framework for simultaneous gait tracking and recognition using person-dependent global shape deformation model. Person-dependent global shape deformations are modeled using a nonlinear generative model with kinematic manifold embedding and kernel mapping. The kinematic manifold is used as a common representation of body pose dynamics in different people in a low dimensional space. Shape style as well as geometric transformation and body pose are estimated within a Bayesian framework using the generative model of global shape deformation. Experimental results show person-dependent synthesis of global shape deformation, gait recognition from extracted silhouettes using style parameters, and simultaneous gait tracking and recognition from image edges","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114755365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Robust Spotting of Key Gestures from Whole Body Motion Sequence
Hee-Deok Yang, A-Yeon Park, Seong-Whan Lee
{"title":"Robust Spotting of Key Gestures from Whole Body Motion Sequence","authors":"Hee-Deok Yang, A-Yeon Park, Seong-Whan Lee","doi":"10.1109/FGR.2006.99","DOIUrl":"https://doi.org/10.1109/FGR.2006.99","url":null,"abstract":"Robust gesture recognition in video requires segmentation of the meaningful gestures from a whole body gesture sequence. This is a challenging problem because it is not straightforward to describe and model meaningless gesture patterns. This paper presents a new method for simultaneous spotting and recognition of whole body key gestures. A human subject is first described by a set of features encoding the angular relations between a dozen body parts in 3D. A feature vector is then mapped to a codeword of gesture HMMs. In order to spot key gestures accurately, a sophisticated method of designing a garbage gesture model is proposed; a model reduction which merges similar states based on data-dependent statistics and relative entropy. This model provides an effective mechanism for qualifying or disqualifying gestural motions. The proposed method has been tested with 20 persons' samples and 80 synthetic data. The proposed method achieved a reliability rate of 94.8% in spotting task and a recognition rate of 97.4% from an isolated gesture","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114832339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Using Feature Combination and Statistical Resampling for Accurate Face Recognition Based on Frequency Domain Representation of Facial Asymmetry
S. Mitra, M. Savvides
{"title":"Using Feature Combination and Statistical Resampling for Accurate Face Recognition Based on Frequency Domain Representation of Facial Asymmetry","authors":"S. Mitra, M. Savvides","doi":"10.1109/FGR.2006.109","DOIUrl":"https://doi.org/10.1109/FGR.2006.109","url":null,"abstract":"This paper explores the efficiency of facial asymmetry in face identification tasks using a frequency domain representation. Satisfactory results are obtained for two different tasks, namely, human identification under extreme expression variations and expression classification, using a PCA-type classifier on a database with 55 individuals, which establishes the robustness of these measures to intra-personal distortions. Furthermore, we demonstrate that it is possible to improve upon these results significantly by simple means such as feature set combination and statistical resampling methods like bagging and random subspace method (RSM) using the same PCA-type base classifier. This even succeeds in attaining perfect classification results with 100% accuracy in some cases. Moreover, both these methods require few additional resources (computing time and power), hence they are useful for practical applications as well and help establish the effectiveness of frequency domain representation of facial asymmetry in automatic identification tasks","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127827802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Skin Segmentation for Gesture Recognition Combining Region and Support Vector Machine Active Learning
Junwei Han, G. Awad, Alistair Sutherland, Hai Wu
{"title":"Automatic Skin Segmentation for Gesture Recognition Combining Region and Support Vector Machine Active Learning","authors":"Junwei Han, G. Awad, Alistair Sutherland, Hai Wu","doi":"10.1109/FGR.2006.27","DOIUrl":"https://doi.org/10.1109/FGR.2006.27","url":null,"abstract":"Skin segmentation is the cornerstone of many applications such as gesture recognition, face detection, and objectionable image filtering. In this paper, we attempt to address the skin segmentation problem for gesture recognition. Initially, given a gesture video sequence, a generic skin model is applied to the first couple of frames to automatically collect the training data. Then, an SVM classifier based on active learning is used to identify the skin pixels. Finally, the results are improved by incorporating region segmentation. The proposed algorithm is fully automatic and adaptive to different signers. We have tested our approach on the ECHO database. Comparing with other existing algorithms, our method could achieve better performance","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124667267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
Learning to identify facial expression during detection using Markov decision process
Ramana Isukapalli, A. Elgammal, R. Greiner
{"title":"Learning to identify facial expression during detection using Markov decision process","authors":"Ramana Isukapalli, A. Elgammal, R. Greiner","doi":"10.1109/FGR.2006.71","DOIUrl":"https://doi.org/10.1109/FGR.2006.71","url":null,"abstract":"While there has been a great deal of research in face detection and recognition, there has been very limited work on identifying the expression on a face. Many current face detection methods use a Viola-Jones style \"cascade\" of Adaboost-based classifiers to detect faces. We demonstrate that faces with similar expression form \"clusters\" in a \"classifier space\" defined by the real-valued outcomes of these classifiers on the images and address the task of using these classifiers to classify a new image into the appropriate cluster (expression). We formulate this as a Markov decision process and use dynamic programming to find an optimal policy - here a decision tree whose internal nodes each correspond to some classifier, whose arcs correspond to ranges of classifier values, and whose leaf nodes each correspond to a specific facial expression, augmented with a sequence of additional classifiers. We present empirical results that demonstrate that our system accurately determines the expression on a face during detection","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"78 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129634884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Fully Automatic Facial Action Recognition in Spontaneous Behavior
M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, Ian R. Fasel, J. Movellan
{"title":"Fully Automatic Facial Action Recognition in Spontaneous Behavior","authors":"M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, Ian R. Fasel, J. Movellan","doi":"10.1109/FGR.2006.55","DOIUrl":"https://doi.org/10.1109/FGR.2006.55","url":null,"abstract":"We present results on a user independent fully automatic system for real time recognition of facial actions from the facial action coding system (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. We present preliminary results on a task of facial action detection in spontaneous expressions during discourse. Support vector machines and AdaBoost classifiers are compared. For both classifiers, the output margin predicts action unit intensity","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131063831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 325