7th International Conference on Automatic Face and Gesture Recognition (FGR06): Latest Publications

Framework for a portable gesture interface
Sébastien Wagner, B. Alefs, C. Picus
DOI: https://doi.org/10.1109/FGR.2006.54 (published 2006-04-10)
Abstract: Gesture recognition is a valuable extension for interaction with portable devices. This paper presents a framework for interaction by hand gestures using a head-mounted camera system. The framework includes automatic activation using AdaBoost hand detection, tracking of chromatic and luminance color modes based on adaptive mean shift, and pose recognition using template matching of the polar histogram. The system achieves a 95% detection rate and 96% classification accuracy in real time, for a non-static camera setup and cluttered background.
Citations: 17
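The polar-histogram descriptor mentioned in the abstract can be sketched roughly as below; the bin counts, normalisation, and L1 template matching are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def polar_histogram(mask, n_ang=16, n_rad=4):
    """Histogram of foreground pixels in polar bins around the mask centroid."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)
    theta = np.arctan2(dy, dx)                      # angle in [-pi, pi]
    r_edges = np.linspace(0, r.max() + 1e-9, n_rad + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_ang + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist / hist.sum()                        # scale-comparable templates

def match(hist, templates):
    """Nearest template under L1 distance (one plausible matching rule)."""
    dists = [np.abs(hist - t).sum() for t in templates]
    return int(np.argmin(dists))
```

Normalising by the total pixel count makes the descriptor comparable across hand sizes, which matters for a head-mounted, non-static camera.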
Graph embedded analysis for head pose estimation
Yun Fu, Thomas S. Huang
DOI: https://doi.org/10.1109/FGR.2006.60 (published 2006-04-10)
Abstract: Head pose is an important vision cue for scene interpretation and human-computer interaction. To determine the head pose, one may consider the low-dimensional manifold structure of the face view points in image space. In this paper, we present an appearance-based strategy for head pose estimation using supervised graph embedding (GE) analysis. Thinking globally and fitting locally, we first construct the neighborhood weighted graph in the sense of supervised LLE. The unified projection is calculated in a closed-form solution based on the GE linearization. We then project new data (face view images) into the embedded low-dimensional subspace with the identical projection. The head pose is finally estimated by K-nearest-neighbor classification. We test the proposed method on 18,100 USF face view images. Experimental results show that, even using a very small training set (e.g. 10 subjects), GE achieves higher head pose estimation accuracy with more efficient dimensionality reduction than existing methods.
Citations: 127
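The final step of the pipeline described above (project with the learned linear map, then classify by nearest neighbors) might look like this sketch; the projection matrix `W` stands in for the paper's graph-embedding output, and `k` and all names are assumptions:

```python
import numpy as np

def knn_pose(train_X, train_pose, W, x, k=5):
    """Estimate the pose of image vector x by k-NN in the embedded subspace.

    W is the linear projection from the (supervised) graph-embedding step;
    computing W is the paper's contribution and is not reproduced here.
    """
    Z = train_X @ W                       # embed the gallery
    z = x @ W                             # embed the probe identically
    d = np.linalg.norm(Z - z, axis=1)
    idx = np.argsort(d)[:k]
    # majority vote over the k nearest training poses
    votes, counts = np.unique(train_pose[idx], return_counts=True)
    return votes[np.argmax(counts)]
```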
A multiview face identification model with no geometric constraints
Jerry Jun Yokono, T. Poggio
DOI: https://doi.org/10.1109/FGR.2006.12 (published 2006-04-10)
Abstract: Face identification systems relying on local descriptors are increasingly used because of their perceived robustness with respect to occlusions and global geometric deformations. Descriptors of this type, based on a set of oriented Gaussian derivative filters, are used in our identification system. In this paper, we explore a pose-invariant multiview face identification system that does not use explicit geometric information. The basic idea of the approach is to find discriminant features to describe a face across different views. A boosting procedure is used to select features out of a large pool of local features collected from the positive training examples. We describe experiments on well-known, though small, face databases with excellent recognition rates.
Citations: 13
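Boosting-as-feature-selection, as the abstract describes it, can be illustrated with threshold stumps; this is a generic AdaBoost sketch over an arbitrary feature matrix, not the paper's Gaussian-derivative descriptor pool:

```python
import numpy as np

def boost_select(X, y, n_rounds=3):
    """Pick discriminative feature indices via AdaBoost over threshold stumps.

    X: (n_samples, n_features), y: labels in {-1, +1}. Each round, the stump
    (feature, threshold, sign) with lowest weighted error is kept and sample
    weights are re-weighted toward the examples it misclassifies.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(n_rounds):
        best = (None, None, None, 1.0)            # (feat, thr, sign, err)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[3]:
                        best = (j, thr, sign, err)
        j, thr, sign, err = best
        err = max(err, 1e-12)                     # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight mistakes
        w /= w.sum()
        chosen.append(j)
    return chosen
```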
Hand Posture Classification and Recognition using the Modified Census Transform
Agnès Just, Yann Rodriguez, S. Marcel
DOI: https://doi.org/10.1109/FGR.2006.62 (published 2006-04-10)
Abstract: Developing new techniques for human-computer interaction is very challenging. Vision-based techniques have the advantage of being unobtrusive, and hands are a natural device that can be used for more intuitive interfaces. But in order to use hands for interaction, it is necessary to be able to recognize them in images. In this paper, we apply an approach that has been used successfully for face detection (B. Froba and A. Ernst, 2004) to the hand posture classification and recognition tasks. The features are based on the modified census transform and are illumination invariant. For the classification and recognition processes, a simple linear classifier is trained using a set of feature lookup tables. The database used for the experiments is a benchmark database in the field of posture recognition. Two protocols have been defined, and we provide results following both for the classification and recognition tasks. Results are very encouraging.
Citations: 111
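The modified census transform itself is concrete enough to sketch: each pixel's 3x3 neighbourhood (center included, which is what distinguishes it from the plain census transform) is encoded as a 9-bit index, one bit per pixel that exceeds the neighbourhood mean. Skipping the image border is a simplification here:

```python
import numpy as np

def mct(img):
    """Modified census transform (after Froba & Ernst): 9-bit code per pixel.

    Comparing against the local mean rather than absolute intensities is why
    the resulting features are invariant to monotonic illumination changes.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            bits = (patch > patch.mean()).flatten()   # row-major, bit 4 = center
            out[y, x] = sum(int(b) << i for i, b in enumerate(bits))
    return out
```

A per-position lookup table over these 0-511 codes is exactly the kind of structure the linear classifier in the abstract can consume.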
AAM derived face representations for robust facial action recognition
S. Lucey, I. Matthews, Changbo Hu, Z. Ambadar, F. D. L. Torre, J. Cohn
DOI: https://doi.org/10.1109/FGR.2006.17 (published 2006-04-10)
Abstract: In this paper, we present results of experiments employing active appearance model (AAM) derived facial representations for the task of facial action recognition. Experimental results demonstrate the benefit of AAM-derived representations on a spontaneous AU database containing "real-world" variation. Additionally, we explore a number of normalization methods for these representations which increase facial action recognition performance.
Citations: 123
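One standard normalisation for landmark-based representations such as AAM shape points is similarity alignment to a reference shape (orthogonal Procrustes). The paper compares several normalisation methods; this is only a generic example of the family, not its specific method:

```python
import numpy as np

def similarity_align(shape, ref):
    """Align an (n, 2) point set to a reference via translation, scale, rotation.

    Removes rigid head-motion variation so that the remaining shape change
    reflects facial action rather than pose.
    """
    mu_s, mu_r = shape.mean(0), ref.mean(0)
    s, r = shape - mu_s, ref - mu_r
    # optimal rotation and scale from the cross-covariance (orthogonal Procrustes)
    u, sig, vt = np.linalg.svd(s.T @ r)
    R = u @ vt
    scale = sig.sum() / (s ** 2).sum()
    return scale * s @ R + mu_r
```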
A 3D facial expression database for facial behavior research
L. Yin, Xiaozhou Wei, Yi Sun, Jun Wang, Matthew J. Rosato
DOI: https://doi.org/10.1109/FGR.2006.6 (published 2006-04-10)
Abstract: Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handling large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available to the research community, with the ultimate goal of fostering research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation.
Citations: 1285
Facial Expression Classification using Gabor and Log-Gabor Filters
Nectarios Rose
DOI: https://doi.org/10.1109/FGR.2006.49 (published 2006-04-10)
Abstract: Facial expression classification has achieved good results in the past using manually extracted facial points convolved with Gabor filters. In this paper, classification performance was tested on feature vectors composed of facial points convolved with Gabor and log-Gabor filters, as well as with whole-image pixel representations of static facial images. Principal component analysis was performed on these feature vectors, and classification accuracies compared using linear discriminant analysis. Experiments carried out on two databases show comparable performance between Gabor and log-Gabor filters, with a classification accuracy of around 85%. This was achieved on low-resolution images, without the need to precisely locate facial points on each face image.
Citations: 68
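The radial transfer function that distinguishes a log-Gabor from a Gabor filter (a Gaussian on a log-frequency axis, so it has no DC component) can be sketched in the frequency domain; the center frequency `f0` and bandwidth `sigma_ratio` are illustrative values, not the paper's settings:

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.65):
    """Radial log-Gabor transfer function on a size x size frequency grid:
    G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma_ratio)^2)).
    """
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                        # avoid log(0); DC is zeroed below
    G = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                        # log-Gabor has zero DC response
    return G

def filter_image(img, G):
    """Apply the filter via the frequency domain; return response magnitude."""
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * G))
```

The zero DC response is the practical advantage over Gabor filters here: the feature values do not depend on the mean image intensity.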
Multi-scale primal feature based facial expression modeling and identification
L. Yin, Xiaozhou Wei
DOI: https://doi.org/10.1109/FGR.2006.80 (published 2006-04-10)
Abstract: In this paper, we present our newly developed face expression modeling system for expression analysis and identification. Given a face image at a front view, a realistic facial model is created using our extended topographic analysis and model instantiation approach. Our facial expression modeling system consists of two major components: (1) facial feature representation using coarse-to-fine multi-scale topographic primitive features, and (2) an adaptive generic model individualization process based on the primal facial surface feature context. The algorithms have been tested using both static images and facial expression sequences. The usefulness of the generated expression models is validated by our 3D facial expression analysis algorithm. The accuracy of the generated expression model is evaluated by comparison between the generated models and the range models obtained by a 3D digitizer.
Citations: 9
Local Linear Regression (LLR) for Pose Invariant Face Recognition
Xiujuan Chai, S. Shan, Xilin Chen, Wen Gao
DOI: https://doi.org/10.1109/FGR.2006.73 (published 2006-04-10)
Abstract: The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is well known as one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given non-frontal view to obtain a virtual gallery/probe face. By formulating this kind of solution as a prediction problem, this paper proposes a simple but efficient novel local linear regression (LLR) method, which can generate the virtual frontal view from a given non-frontal face image. The proposed LLR is inspired by the observation that corresponding local facial regions of a frontal and non-frontal view pair satisfy the linear assumption much better than the whole face region. This is easily explained by the fact that a 3D face shape is composed of many local planar surfaces, which naturally satisfy a linear model under imaging projection. In LLR, we simply partition the whole non-frontal face image into multiple local patches and apply linear regression to each patch to predict its virtual frontal patch. Compared with other methods, experimental results on the CMU PIE database show a distinct advantage for the proposed method.
Citations: 32
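The patchwise regression idea can be sketched as follows; splitting the feature vector into even column blocks stands in for the paper's facial-region patches, and all names and sizes are assumptions:

```python
import numpy as np

def fit_llr(nonfrontal, frontal, n_patches=4):
    """Learn one least-squares linear map per local patch.

    nonfrontal, frontal: (n_samples, n_features) training pairs. Each
    non-frontal patch predicts its frontal counterpart independently,
    exploiting the local-planarity argument from the abstract.
    """
    maps = []
    for Xp, Yp in zip(np.array_split(nonfrontal, n_patches, axis=1),
                      np.array_split(frontal, n_patches, axis=1)):
        W, *_ = np.linalg.lstsq(Xp, Yp, rcond=None)
        maps.append(W)
    return maps

def predict_llr(x, maps):
    """Assemble the virtual frontal view patch by patch."""
    parts = np.array_split(x, len(maps))
    return np.concatenate([p @ W for p, W in zip(parts, maps)])
```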
Towards Automatic Body Language Annotation
P. Chippendale
DOI: https://doi.org/10.1109/FGR.2006.105 (published 2006-04-10)
Abstract: This paper describes a real-time system developed for the derivation of low-level visual cues targeted at the recognition of simple hand, head and body gestures. A novel, adaptive background subtraction technique is presented together with a tool for monitoring repetitive movements, e.g. fidgeting. To monitor subtle body movements in an unconstrained environment, active cameras with pan, tilt and zoom capabilities must be employed to track an individual's actions more closely. This paper then explores a means of detecting small and large scale human activity within images produced from active cameras that may be reoriented during monitoring.
Citations: 33
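A minimal running-average background model illustrates the general idea of adaptive background subtraction (the paper's technique is novel and more involved, especially under camera reorientation); `alpha` and `thresh` are illustrative values:

```python
import numpy as np

class AdaptiveBackground:
    """Running-average background model with per-pixel foreground thresholding."""

    def __init__(self, first_frame, alpha=0.05, thresh=25):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha        # adaptation rate
        self.thresh = thresh      # intensity-difference threshold

    def apply(self, frame):
        frame = frame.astype(np.float64)
        mask = np.abs(frame - self.bg) > self.thresh   # foreground pixels
        # adapt only where the scene looks like background, so a briefly
        # static person is not absorbed into the model immediately
        self.bg = np.where(mask, self.bg,
                           self.alpha * frame + (1 - self.alpha) * self.bg)
        return mask
```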