7th International Conference on Automatic Face and Gesture Recognition (FGR06): Latest Publications

Robust method for real-time, continuous, 3D detection of obstructed faces in indoors environments
S. Grange, C. Baur
{"title":"Robust method for real-time, continuous, 3D detection of obstructed faces in indoors environments","authors":"S. Grange, C. Baur","doi":"10.1109/FGR.2006.97","DOIUrl":"https://doi.org/10.1109/FGR.2006.97","url":null,"abstract":"We address the need for robust detection of obstructed human features in complex environments, with a focus on intelligent surgical UIs. In our setup, real-time detection is used to find features without the help of local (spatial or temporal) information. Such a detector is used to validate, correct or reject the output of the visual feature tracking, which is locally more robust, but drifts over time. In operating rooms (OR), surgeon faces are typically obstructed by sterile clothing and tools, making statistical and/or feature-based face detection approaches ineffective. We propose a new method for face detection that relies on geometric information from disparity maps, locally refined by color processing. We have applied our method to a surgical mock-up scene, as well as to images gathered during real surgery. Running in a real-time, continuous detection loop, our detector successfully found 99% of target heads (0.1% false positive) in our simulated setup, and 98% of target heads (0.5% false positive) in the surgical theater","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114704966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
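The two-cue cascade described above (head-sized blobs from the disparity map, refined by a color check) can be sketched compactly. The snippet below is a minimal illustration under assumed inputs, not the authors' implementation: it presumes a precomputed disparity map registered to a BGR frame, and the blob-size limits and YCrCb skin range are placeholder values.

```python
# Sketch of disparity-plus-color head detection in the spirit of Grange & Baur.
# Assumes a precomputed disparity map and a registered BGR image; thresholds,
# head-size limits, and the skin-color range are illustrative guesses.
import cv2
import numpy as np

def detect_heads(disparity: np.ndarray, bgr: np.ndarray,
                 min_disp: float = 32.0, min_area: int = 1500,
                 max_area: int = 20000) -> list[tuple[int, int, int, int]]:
    # 1) Geometric cue: keep near-camera blobs of roughly head size.
    near = (disparity > min_disp).astype(np.uint8) * 255
    near = cv2.morphologyEx(near, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(near, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # 2) Color cue: a coarse skin mask in YCrCb to refine candidate blobs.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))

    heads = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue
        x, y, w, h = cv2.boundingRect(c)
        # Require some visible skin inside the blob; obstructed faces still
        # expose small skin patches (eyes/forehead), so the bar stays low.
        if cv2.countNonZero(skin[y:y + h, x:x + w]) > 0.05 * w * h:
            heads.append((x, y, w, h))
    return heads
```

Keeping the skin requirement weak is deliberate: the whole point of the geometric cue is that heavily obstructed faces still present as head-sized disparity blobs even when little skin is visible.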
Accurate face localisation for faces under active near-IR illumination
X. Zou, J. Kittler, K. Messer
{"title":"Accurate face localisation for faces under active near-IR illumination","authors":"X. Zou, J. Kittler, K. Messer","doi":"10.1109/FGR.2006.18","DOIUrl":"https://doi.org/10.1109/FGR.2006.18","url":null,"abstract":"In this paper we propose a novel approach to accurate face localisation for faces under near-infrared (near-IR) illumination. The circular shape of the bright pupils is a scale and rotation invariant feature which is exploited to quickly detect pupil candidates. As the first step of face localisation, a rule-based pupil detector is employed to find candidate pupil edges from the edge map. Candidate eye centres for each eye are selected from the neighborhood of corresponding pupil regions and sorted based on the similarity to eye templates. Two support vector machine (SVM) classifiers based on eye appearance are employed to validate those candidates for each eye individually. Finally candidates are further validated in pair by an SVM classifier based on global face appearance. In the experiment on a near-IR face database with 40 subjects and 48 images per subject, 96.5% images are accurately localised using the proposed approach","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121722737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
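A rough sketch of the candidate-then-validate pipeline follows. It is illustrative only: the paper finds candidate pupil edges with a rule-based detector on the edge map, whereas this sketch substitutes a circular Hough transform, and the pre-trained `eye_svm` and patch size are assumptions.

```python
# Sketch of the bright-pupil candidate pipeline described above: circular pupil
# candidates from the image, then appearance-based SVM validation per eye.
# Not the authors' code; the trained classifier and patch size are assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def pupil_candidates(gray: np.ndarray) -> np.ndarray:
    # Bright pupils under active near-IR form small, roughly circular blobs.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=15, minRadius=2, maxRadius=12)
    return np.empty((0, 3)) if circles is None else circles[0]

def validate_eyes(gray: np.ndarray, candidates: np.ndarray,
                  eye_svm: SVC, patch: int = 24) -> list[tuple[int, int]]:
    """Keep candidates whose surrounding patch the SVM labels as an eye."""
    accepted = []
    for (x, y, _r) in candidates.astype(int):
        roi = gray[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        if roi.size == 0:
            continue
        feat = cv2.resize(roi, (24, 24)).ravel()[None, :] / 255.0
        if eye_svm.predict(feat)[0] == 1:
            accepted.append((x, y))
    return accepted
```

The paper adds a third stage on top of this: surviving candidates are paired and checked by a global face-appearance SVM, which this sketch omits.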
A landmark paper in face recognition
G. M. Beumer, Q. Tao, A. Bazen, R. Veldhuis
{"title":"A landmark paper in face recognition","authors":"G. M. Beumer, Q. Tao, A. Bazen, R. Veldhuis","doi":"10.1109/FGR.2006.10","DOIUrl":"https://doi.org/10.1109/FGR.2006.10","url":null,"abstract":"Good registration (alignment to a reference) is essential for accurate face recognition. The effects of the number of landmarks on the mean localization error and the recognition performance are studied. Two landmarking methods are explored and compared for that purpose: (1) the most likely-landmark locator (MLLL), based on maximizing the likelihood ratio, and (2) Viola-Jones detection. Both use the locations of facial features (eyes, nose, mouth, etc) as landmarks. Further, a landmark-correction method (BILBO) based on projection into a subspace is introduced. The MLLL has been trained for locating 17 landmarks and the Viola-Jones method for 5. The mean localization errors and effects on the verification performance have been measured. It was found that on the eyes, the Viola-Jones detector is about 1% of the interocular distance more accurate than the MLLL-BILBO combination. On the nose and mouth, the MLLL-BILBO combination is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks will result in lower equal-error rates, even when the landmarking is not so accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134354965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
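The correction step (projection into a subspace, as in BILBO) is easy to illustrate: learn a PCA shape subspace from example landmark vectors, then project a noisy measurement into it and back, which pulls outlier points toward plausible face shapes. A minimal sketch, with the number of modes chosen arbitrarily:

```python
# Sketch of subspace-based landmark correction in the spirit of BILBO:
# project a noisy landmark vector onto a PCA shape subspace and back.
# Dimensions and the number of retained modes are illustrative.
import numpy as np

def fit_shape_subspace(shapes: np.ndarray, n_modes: int = 8):
    """shapes: (N, 2L) matrix of stacked (x1..xL, y1..yL) training landmarks."""
    mean = shapes.mean(axis=0)
    _u, _s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]          # mean shape and top eigen-shapes

def correct_landmarks(noisy: np.ndarray, mean: np.ndarray,
                      modes: np.ndarray) -> np.ndarray:
    """Reconstruct the landmark vector from its subspace coefficients."""
    coeffs = modes @ (noisy - mean)    # project into the shape subspace
    return mean + modes.T @ coeffs     # back-project: corrected landmarks
```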
The isometric self-organizing map for 3D hand pose estimation
Haiying Guan, R. Feris, M. Turk
{"title":"The isometric self-organizing map for 3D hand pose estimation","authors":"Haiying Guan, R. Feris, M. Turk","doi":"10.1109/FGR.2006.103","DOIUrl":"https://doi.org/10.1109/FGR.2006.103","url":null,"abstract":"We propose an isometric self-organizing map (ISO-SOM) method for nonlinear dimensionality reduction, which integrates a self-organizing map model and an ISOMAP dimension reduction algorithm, organizing the high dimension data in a low dimension lattice structure. We apply the proposed method to the problem of appearance-based 3D hand posture estimation. As a learning stage, we use a realistic 3D hand model to generate data encoding the mapping between the hand pose space and the image feature space. The intrinsic dimension of such nonlinear mapping is learned by ISOSOM, which clusters the data into a lattice map. We perform 3D hand posture estimation on this map, showing that the ISOSOM algorithm performs better than traditional image retrieval algorithms for pose estimation. We also show that a 2.5D feature representation based on depth edges is clearly superior to intensity edge features commonly used in previous methods","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128894052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 50
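A toy sketch of the combination: embed features with ISOMAP, then organize the embedded samples on a SOM lattice so that pose lookup becomes nearest-node retrieval. All sizes, the training schedule, and the random stand-in data are assumptions, not the paper's settings.

```python
# Toy sketch of the ISOSOM idea: ISOMAP embedding followed by a small
# hand-rolled SOM over the embedded samples.
import numpy as np
from sklearn.manifold import Isomap

def train_som(data: np.ndarray, rows=10, cols=10, iters=2000, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit, then a Gaussian neighborhood update around it.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
        decay = np.exp(-t / iters)
        h = np.exp(-((grid - bmu) ** 2).sum(-1) / (2 * (sigma0 * decay) ** 2))
        weights += (lr0 * decay) * h[..., None] * (x - weights)
    return weights

features = np.random.rand(500, 40)          # stand-in for image features
embedded = Isomap(n_components=3).fit_transform(features)
som = train_som(embedded)                   # lattice indexing the pose space
```

In the paper's setup, each lattice node would additionally store the 3D hand pose of the synthetic samples it clusters, so a query image is answered by the pose attached to its best-matching unit.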
Face Recognition from a Tabula Rasa Perspective
M. C. Santana, O. Déniz-Suárez, J. Lorenzo-Navarro, M. Hernández-Tejera
{"title":"Face Recognition from a Tabula Rasa Perspective","authors":"M. C. Santana, O. Déniz-Suárez, J. Lorenzo-Navarro, M. Hernández-Tejera","doi":"10.1109/FGR.2006.44","DOIUrl":"https://doi.org/10.1109/FGR.2006.44","url":null,"abstract":"In this paper a system for face recognition from a tabula rasa (i.e. blank slate) perspective is described. A priori, the system has the only ability to detect automatically faces and represent them in a space of reduced dimension. Later, the system is exposed to over 400 different identities, observing its recognition performance evolution. The preliminary results achieved indicate on the one side that the system is able to reject most of unknown individuals after an initialization stage. On the other side the ability to recognize known individuals (or revisitors) is still far from being reliable. However, the observation of the recognition evolution results for individuals frequently met suggests that the more meetings are held, the lower recognition error is achieved","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124118013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
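The tabula-rasa protocol maps naturally onto an open-set recognizer with incremental enrollment: match against a growing gallery, enroll on rejection, and refine templates for revisitors. A minimal sketch of that loop, assuming some external face-embedding function and an arbitrary distance threshold (neither is specified by the paper):

```python
# Sketch of an open-set, incrementally enrolled recognizer matching the
# tabula-rasa setup described above. The embedding and threshold are assumed.
import numpy as np

class TabulaRasaRecognizer:
    def __init__(self, reject_threshold: float = 0.8):
        self.templates: list[np.ndarray] = []   # one mean embedding per identity
        self.counts: list[int] = []
        self.threshold = reject_threshold

    def observe(self, embedding: np.ndarray) -> int:
        """Return the matched identity index, enrolling a new one on rejection."""
        if self.templates:
            dists = [np.linalg.norm(embedding - t) for t in self.templates]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Revisitor: running-mean update sharpens the template over time,
                # mirroring the observation that more meetings lower the error.
                n = self.counts[best]
                self.templates[best] = (self.templates[best] * n + embedding) / (n + 1)
                self.counts[best] += 1
                return best
        self.templates.append(embedding)        # unknown: enroll as new identity
        self.counts.append(1)
        return len(self.templates) - 1
```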
Automatic gesture recognition for intelligent human-robot interaction
Seong-Whan Lee
{"title":"Automatic gesture recognition for intelligent human-robot interaction","authors":"Seong-Whan Lee","doi":"10.1109/FGR.2006.25","DOIUrl":"https://doi.org/10.1109/FGR.2006.25","url":null,"abstract":"An intelligent robot requires natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRl). Previous HRI researches were focused on issues such as hand gesture, sign language, and command gesture recognition. However, automatic recognition of whole body gestures is required in order to operate HRI naturally. This can be a challenging problem because describing and modeling meaningful gesture patterns from whole body gestures are complex tasks. This paper presents a new method for spotting and recognizing whole body key gestures at the same time on a mobile robot. Our method is simultaneously used with other HRI approaches such as speech recognition, face recognition, and so forth. In this regard, both of execution speed and recognition performance should be considered. For efficient and natural operation, we used several approaches at each step of gesture recognition; learning and extraction of articulated joint information, representing gesture as a sequence of clusters, spotting and recognizing a gesture with HMM. In addition, we constructed a large gesture database, with which we verified our method. As a result, our method is successfully included and operated in a mobile robot","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126040802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 76
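The spotting step (report a gesture only when a gesture model beats a non-gesture alternative) can be sketched with a scaled forward algorithm over the cluster-symbol sequence. The threshold-model comparison below follows common HMM spotting practice; the paper's actual models and parameters are not reproduced here.

```python
# Sketch of HMM-based gesture spotting: score an observation window against
# each gesture HMM and a generic threshold model; all parameters (pi, A, B)
# are placeholders to be trained elsewhere.
import numpy as np

def log_forward(obs: list[int], pi: np.ndarray, A: np.ndarray,
                B: np.ndarray) -> float:
    """Log-likelihood of a discrete observation sequence under an HMM."""
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()          # rescale each step to avoid underflow
        log_p += np.log(s)
        alpha /= s
    return log_p + np.log(alpha.sum())

def spot(obs, gesture_hmms: dict, threshold_hmm) -> str | None:
    """Report a gesture only if it beats the generic threshold model."""
    scores = {name: log_forward(obs, *hmm) for name, hmm in gesture_hmms.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > log_forward(obs, *threshold_hmm) else None
```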
Facial features extraction in color images using enhanced active shape model
M. Mahoor, M. Abdel-Mottaleb
{"title":"Facial features extraction in color images using enhanced active shape model","authors":"M. Mahoor, M. Abdel-Mottaleb","doi":"10.1109/FGR.2006.51","DOIUrl":"https://doi.org/10.1109/FGR.2006.51","url":null,"abstract":"In this paper, we present an improved active shape model (ASM) for facial feature extraction. The original ASM method developed by Cootes et al. highly relies on the initialization and the representation of the local structure of the facial features in the image. We use color information to improve the ASM approach for facial feature extraction. The color information is used to localize the centers of the mouth and the eyes to assist the initialization step. Moreover, we model the local structure of the feature points in the RGB color space. Besides, we use 2D affine transformation to align facial features that are perturbed by head pose variations. In fact, the 2D affine transformation compensates for the effects of both head pose variations and the projection of 3D data to 2D. Experiments on a face database of 50 subjects show that our approach outperforms the standard ASM and is successful in facial feature extraction","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116083109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 66
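The 2D affine alignment step mentioned in the abstract reduces to a least-squares fit between corresponding landmark sets. A minimal sketch of that fit (the landmark correspondences themselves are assumed given):

```python
# Sketch of the 2D affine alignment step: solve a least-squares affine
# transform mapping detected feature points onto reference positions.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (L, 2) corresponding points. Returns a 2x3 affine matrix."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                   # (L, 3) homogeneous source
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ M ≈ dst
    return M.T                                   # (2, 3): [A | t]

def apply_affine(M: np.ndarray, pts: np.ndarray) -> np.ndarray:
    return pts @ M[:, :2].T + M[:, 2]
```

An affine map (rather than a similarity) is the natural choice here because, as the abstract notes, it absorbs not just in-plane rotation and scale but also the shears introduced by projecting a rotated 3D face onto the image plane.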
Human action recognition using multi-view image sequences
Mohiuddin Ahmad, Seong-Whan Lee
{"title":"Human action recognition using multi-view image sequences","authors":"Mohiudding Ahmad, Seong-Whan Lee","doi":"10.1109/FGR.2006.65","DOIUrl":"https://doi.org/10.1109/FGR.2006.65","url":null,"abstract":"Recognizing human action from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences in different viewing angles that uses the Cartesian component of optical flow velocity and human body shape feature vector information. We use principal component analysis to reduce the higher dimensional shape feature space into low dimensional shape feature space. We represent each action using a set of multidimensional discrete hidden Markov model and model each action for any viewing direction. We performed experiments of the proposed method by using KU gesture database. Experimental results based on this database of different actions show that our method is robust","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132574206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
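A sketch of the per-frame feature construction implied above: Cartesian optical-flow components averaged over the body region, concatenated with a PCA-compressed silhouette shape vector. The flow routine, silhouette resolution, and a pre-fitted PCA are illustrative assumptions.

```python
# Sketch of per-frame observation vectors combining optical flow and a
# PCA-reduced shape descriptor, as described in the abstract above.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def frame_features(prev_gray, gray, silhouette, shape_pca: PCA) -> np.ndarray:
    # Cartesian optical-flow components, averaged over the body region.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mask = silhouette > 0
    vx, vy = flow[..., 0][mask].mean(), flow[..., 1][mask].mean()

    # High-dimensional shape vector (normalized silhouette), PCA-compressed.
    shape = cv2.resize(silhouette, (32, 32)).ravel()[None, :] / 255.0
    shape_lowdim = shape_pca.transform(shape)[0]
    return np.concatenate([[vx, vy], shape_lowdim])
```

Such per-frame vectors would then be quantized and fed to the per-action discrete HMMs for classification, analogous to the spotting sketch shown for the gesture-recognition entry above.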
Automatic feature extraction for multiview 3D face recognition
Xiaoguang Lu, Anil K. Jain
{"title":"Automatic feature extraction for multiview 3D face recognition","authors":"Xiaoguang Lu, Anil K. Jain","doi":"10.1109/FGR.2006.23","DOIUrl":"https://doi.org/10.1109/FGR.2006.23","url":null,"abstract":"Current 2D face recognition systems encounter difficulties in recognizing faces with large pose variations. Utilizing the pose-invariant features of 3D face data has the potential to handle multiview face matching. A feature extractor based on the directional maximum is proposed to estimate the nose tip location and the pose angle simultaneously. A nose profile model represented by subspaces is used to select the best candidates for the nose tip. Assisted by a statistical feature location model, a multimodal scheme is presented to extract eye and mouth corners. Using the automatic feature extractor, a fully automatic 3D face recognition system is developed. The system is evaluated on two databases, the MSU database (300 multiview test scans from 100 subjects) and the UND database (953 near frontal scans from 277 subjects). The automatic system provides recognition accuracy that is comparable to the accuracy of a system with manually labeled feature points","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132759517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 142
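The directional-maximum idea is straightforward to sketch: sweep candidate facing directions and, for each, hypothesize the nose tip as the scan point extremal along that direction, yielding joint (pose, nose tip) hypotheses. The angle grid below is illustrative; the subspace-based nose profile validation that the paper uses to pick among hypotheses is omitted here.

```python
# Sketch of directional-maximum nose tip hypotheses on a 3D face scan.
import numpy as np

def nose_tip_candidates(points: np.ndarray, yaw_angles_deg=range(-90, 91, 15)):
    """points: (N, 3) face scan. Yields (yaw_deg, nose_tip_xyz) hypotheses."""
    for yaw in yaw_angles_deg:
        rad = np.deg2rad(yaw)
        # Viewing direction in the xz-plane for a frontal-to-profile sweep.
        direction = np.array([np.sin(rad), 0.0, np.cos(rad)])
        idx = int(np.argmax(points @ direction))  # extremal point along direction
        yield yaw, points[idx]
```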
A layered deformable model for gait analysis
Haiping Lu, K. Plataniotis, A. Venetsanopoulos
{"title":"A layered deformable model for gait analysis","authors":"Haiping Lu, K. Plataniotis, A. Venetsanopoulos","doi":"10.1109/FGR.2006.11","DOIUrl":"https://doi.org/10.1109/FGR.2006.11","url":null,"abstract":"In this paper, a layered deformable model (LDM) is proposed for human body pose recovery in gait analysis. This model is inspired by the manually labeled silhouettes in (Z. Liu, et al., July 2004) and it is designed to closely match them. For fronto-parallel gait, the introduced LDM model defines the body part widths and lengths, the position and the joint angles of human body using 22 parameters. The model consists of four layers and allows for limb deformation. With this model, our objective is to recover its parameters (and thus the human body pose) from automatically extracted silhouettes. LDM recovery algorithm is first developed for manual silhouettes, in order to generate ground truth sequences for comparison and useful statistics regarding the LDM parameters. It is then extended for automatically extracted silhouettes. The proposed methodologies have been tested on 10005 frames from 285 gait sequences captured under various conditions and an average error rate of 7% is achieved for the lower limb joint angles of all the frames, showing great potential for model-based gait recognition","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133215871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
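To make the idea of a low-dimensional parameterized body model concrete, here is a small sketch: segment lengths, a root position, and joint angles, with a forward-kinematics helper for one leg. The paper's exact 22-parameter split is not reproduced; this grouping is an illustrative assumption (image coordinates, y pointing down).

```python
# Sketch of a parameterized body model in the spirit of the LDM, with a tiny
# forward-kinematics helper; parameter names and grouping are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class LDMPose:
    root: tuple[float, float]      # body position in the image plane
    torso_len: float
    thigh_len: float
    shin_len: float
    hip_angle: float               # radians, measured from vertical
    knee_angle: float              # relative to the thigh segment

    def leg_joints(self) -> list[np.ndarray]:
        """Hip, knee, and ankle positions for a fronto-parallel view."""
        hip = np.array(self.root) + [0.0, self.torso_len]
        knee = hip + self.thigh_len * np.array(
            [np.sin(self.hip_angle), np.cos(self.hip_angle)])
        total = self.hip_angle + self.knee_angle
        ankle = knee + self.shin_len * np.array([np.sin(total), np.cos(total)])
        return [hip, knee, ankle]
```

Pose recovery then amounts to searching such a parameter vector so that the rendered model silhouette best matches the extracted one, which is what makes the reported lower-limb joint-angle error a natural evaluation metric.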