Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580): Latest Publications

Tracking a person with 3-D motion by integrating optical flow and depth
R. Okada, Y. Shirai, J. Miura
DOI: 10.1109/AFGR.2000.840656 | Published: 2000-03-26
Abstract: This paper describes a method of tracking a person with 3D translation and rotation by integrating optical flow and depth. The target region is first extracted based on the probability of each pixel belonging to the target person. The target state (3D position, posture, motion) is estimated based on the shape and the position of the target region in addition to optical flow and depth. Multiple target states are maintained when the image measurements give rise to ambiguities about the target state. Experimental results with real image sequences show the effectiveness of our method.
Citations: 42
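The abstract does not spell out its probability model, so the sketch below is only a hypothetical illustration (Python/NumPy, not the authors' code): the per-pixel target probability is taken as the agreement between the observed flow and depth and the flow and depth predicted from the current target state, and the target region is the set of high-probability pixels.

import numpy as np

def target_probability(flow, depth, pred_flow, pred_depth,
                       sigma_flow=2.0, sigma_depth=0.15):
    """Per-pixel probability that a pixel belongs to the tracked person.

    flow       : (H, W, 2) optical flow field in pixels/frame
    depth      : (H, W)    depth map in metres
    pred_flow  : (2,)      image motion predicted from the current target state
    pred_depth : float     depth predicted from the current target state
    The Gaussian weighting of the flow/depth residuals is an assumption made
    for this sketch; the paper does not specify its probability model.
    """
    flow_err = np.linalg.norm(flow - pred_flow, axis=2)
    depth_err = np.abs(depth - pred_depth)
    p_flow = np.exp(-0.5 * (flow_err / sigma_flow) ** 2)
    p_depth = np.exp(-0.5 * (depth_err / sigma_depth) ** 2)
    return p_flow * p_depth

def extract_target_region(prob, threshold=0.5):
    # Binary target region; connected-component filtering could follow.
    return prob > threshold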
Face recognition algorithms as models of human face processing
A. O'Toole, Y. Cheng, B. Ross, Heather A. Wild, P. Phillips
DOI: 10.1109/AFGR.2000.840689 | Published: 2000-03-26
Abstract: We evaluated the adequacy of computational algorithms as models of human face processing by looking at how the algorithms and humans process individual faces. By comparing model- and human-generated measures of the similarity between pairs of faces, we were able to assess the accord between several automatic face recognition algorithms and human perceivers. Multidimensional scaling (MDS) was used to create a spatial representation of the subject response patterns. Next, the model response patterns were projected into this space. The results revealed a common bimodal structure for both the subjects and for most of the models. The bimodal subject structure reflected strategy differences in making similarity decisions. For the models, the bimodal structure was related to combined aspects of the representations and the distance metrics used in the implementations.
Citations: 15
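As a rough illustration of this kind of analysis (not the authors' pipeline), similarity response patterns can be converted to dissimilarities and embedded with scikit-learn's MDS. For simplicity this sketch embeds humans and models jointly rather than projecting the models into a subject-only space, and the ratings are random placeholders.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical data: similarity ratings for 100 face pairs from
# 20 human subjects and 5 algorithms (values in [0, 1]).
rng = np.random.default_rng(0)
human_ratings = rng.random((20, 100))
model_ratings = rng.random((5, 100))

# Dissimilarity between raters' response patterns: 1 - Pearson correlation.
patterns = np.vstack([human_ratings, model_ratings])
dissim = 1.0 - np.corrcoef(patterns)

# 2-D spatial representation of the human and model response patterns.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)
human_xy, model_xy = embedding[:20], embedding[20:]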
Bimodal emotion recognition
L. D. Silva, Pei Chi Ng
DOI: 10.1109/AFGR.2000.840655 | Published: 2000-03-26
Abstract: This paper describes the use of statistical techniques and hidden Markov models (HMM) in the recognition of emotions. The method aims to classify 6 basic emotions (anger, dislike, fear, happiness, sadness and surprise) from both facial expressions (video) and emotional speech (audio). The emotions of 2 human subjects were recorded and analyzed. The findings show that the audio and video information can be combined using a rule-based system to improve the recognition rate.
Citations: 126
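The abstract states only that a rule-based system combines the two modalities; one simple, purely hypothetical fusion rule is to accept an emotion when both modalities agree and otherwise take the more confident modality.

EMOTIONS = ["anger", "dislike", "fear", "happiness", "sadness", "surprise"]

def fuse(video_scores, audio_scores):
    """Rule-based fusion of per-emotion scores from the two modalities.

    video_scores, audio_scores: dicts mapping emotion -> confidence in [0, 1].
    The specific rules below are illustrative, not the ones from the paper.
    """
    v_best = max(video_scores, key=video_scores.get)
    a_best = max(audio_scores, key=audio_scores.get)
    if v_best == a_best:                       # both modalities agree
        return v_best
    # otherwise pick the modality that is more confident in its top choice
    if video_scores[v_best] >= audio_scores[a_best]:
        return v_best
    return a_best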
Crane gesture recognition using pseudo 3-D hidden Markov models
Stefan Müller, S. Eickeler, G. Rigoll
DOI: 10.1109/AFGR.2000.840665 | Published: 2000-03-26
Abstract: A recognition technique based on novel pseudo 3D hidden Markov models, which can integrate spatial as well as temporal derived features, is presented. The approach allows the recognition of dynamic gestures such as waving hands as well as static gestures such as standing in a special pose. Pseudo 3D hidden Markov models (P3DHMM) are an extension of the pseudo 2D case, which has been successfully used for the classification of images and the recognition of faces. In the P3DHMM case the so-called superstates contain P2DHMMs, and thus whole image sequences can be generated by these models. Our approach has been evaluated on a crane signal database, which consists of 12 different predefined gestures for maneuvering cranes.
Citations: 13
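The model hierarchy described above (P3DHMM superstates containing P2DHMMs, whose superstates in turn contain 1-D HMMs) can be pictured with a minimal container structure. This is only a schematic of the nesting, with placeholder fields and generic gesture names, not a working recognizer.

from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class HMM1D:                      # models one row of image features
    transitions: np.ndarray       # (n_states, n_states)
    emission_params: List[dict]   # one emission model per state

@dataclass
class P2DHMM:                     # models one image: superstates over rows
    superstate_transitions: np.ndarray
    row_models: List[HMM1D] = field(default_factory=list)

@dataclass
class P3DHMM:                     # models an image sequence (a gesture)
    superstate_transitions: np.ndarray
    frame_models: List[P2DHMM] = field(default_factory=list)

# One model per gesture class; recognition would pick the class whose
# P3DHMM assigns the highest likelihood to the observed image sequence.
crane_gestures = {f"gesture_{i:02d}": P3DHMM(np.eye(3)) for i in range(12)}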
Estimation of the illuminant colour from human skin colour
M. Störring, H. J. Andersen, E. Granum
DOI: 10.1109/AFGR.2000.840613 | Published: 2000-03-26
Abstract: Colour is an important and useful feature for object tracking and recognition in computer vision. However, the colour of an object changes if the illuminant colour changes; under a known illuminant colour, it becomes a robust feature. More and more computer vision applications track humans, for example in interfaces for human-computer interaction or automatic cameramen, where skin colour is an often-used feature. Hence, it would be of significant importance to know the illuminant colour in such applications. This paper proposes a novel method to estimate the current illuminant colour from skin colour observations. The method is based on a physical model of reflections, the assumption that illuminant colours are located close to the Planckian locus, and knowledge about the camera parameters. The method is empirically tested using real images. The average estimation error of the correlated colour temperature is as small as 180 K. Applications are, for example, in colour-based tracking to adapt to changes in lighting and in visualisation to re-render image colours to their appearance under canonical viewing conditions.
Citations: 53
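The paper's estimator is built on a physical reflection model, the Planckian locus, and known camera parameters, none of which are reproduced here. As a loosely related, generic colorimetry illustration of correlated colour temperature, McCamy's polynomial approximates CCT from CIE 1931 chromaticity.

def mccamy_cct(x, y):
    """Approximate correlated colour temperature (K) from CIE 1931 (x, y).

    McCamy's approximation; reasonable for roughly 2800 K - 6500 K.
    This is a standard colorimetry formula, not the estimator from the paper.
    """
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Example: chromaticity of the D65 white point gives roughly 6500 K.
print(round(mccamy_cct(0.3127, 0.3290)))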
Support vector regression and classification based multi-view face detection and recognition
Yongmin Li, S. Gong, H. Liddell
DOI: 10.1109/AFGR.2000.840650 | Published: 2000-03-26
Abstract: A support vector machine-based multi-view face detection and recognition framework is described. Face detection is carried out by constructing several detectors, each of them in charge of one specific view. The symmetrical property of face images is employed to simplify the complexity of the modelling. The estimation of head pose, which is achieved by using the support vector regression technique, provides crucial information for choosing the appropriate face detector. This helps to improve the accuracy and reduce the computation in multi-view face detection compared to other methods. For video sequences, further computational reduction can be achieved by using a pose change smoothing strategy. When face detectors find a face in frontal view, a support vector machine-based multi-class classifier is activated for face recognition. All the above issues are integrated under a support vector machine framework. Test results on four video sequences are presented: the detection rate is above 95%, recognition accuracy is above 90%, average pose estimation error is around 10°, and the full detection and recognition speed is up to 4 frames/second on a Pentium II 300 PC.
Citations: 276
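A minimal sketch of the detector-selection idea, using scikit-learn's SVR/SVC in place of the authors' SVM implementation and random placeholder features: the regressed head pose decides which view-specific face detector is applied.

import numpy as np
from sklearn.svm import SVR, SVC

# Hypothetical training data: flattened face patches with known yaw angles
# (degrees) and face/non-face labels, split into three coarse views.
rng = np.random.default_rng(1)
X = rng.random((300, 400))
yaw = rng.uniform(-90, 90, 300)
is_face = rng.integers(0, 2, 300)

pose_regressor = SVR(kernel="rbf").fit(X, yaw)

views = {"left": (-90, -30), "frontal": (-30, 30), "right": (30, 90)}
detectors = {}
for name, (lo, hi) in views.items():
    mask = (yaw >= lo) & (yaw < hi)
    detectors[name] = SVC(kernel="rbf").fit(X[mask], is_face[mask])

def detect(patch):
    # Estimate the pose first, then run only the detector for that view.
    angle = pose_regressor.predict(patch[None, :])[0]
    for name, (lo, hi) in views.items():
        if lo <= angle < hi:
            return name, bool(detectors[name].predict(patch[None, :])[0])
    return "frontal", bool(detectors["frontal"].predict(patch[None, :])[0])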
Comparative evaluation of face sequence matching for content-based video access
S. Satoh
DOI: 10.1109/AFGR.2000.840629 | Published: 2000-03-26
Abstract: The paper presents a comparative evaluation of matching methods for face sequences obtained from actual videos. Face information is quite important in videos, especially in news programs, dramas, and movies. Accurate face sequence matching enables many multimedia applications, including content-based face retrieval, automated face annotation, and video authoring. However, face sequences in videos are subject to variation in lighting condition, pose, facial expression, etc., which cause difficulty in face matching. In order to cope with this problem, several face sequence matching methods are proposed by extending face still-image matching, traditional pattern recognition, and recent pattern recognition techniques. They are expected to be applicable to face sequences extracted from actual videos. The performance of these methods is evaluated as the accuracy of face sequence annotation, using a considerable amount of actual drama videos. The evaluation results reveal merits and demerits of these methods and indicate future research directions for face matching in videos.
Citations: 89
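The paper compares several matching schemes rather than prescribing one; a common baseline in this family, shown here only as an illustrative sketch, scores two face sequences by their closest pair of frames and annotates a query with the label of its nearest labelled sequence.

import numpy as np

def sequence_distance(seq_a, seq_b):
    """Distance between two face sequences given per-frame feature vectors.

    seq_a: (Na, D) array, seq_b: (Nb, D) array.
    Uses the minimum pairwise Euclidean distance, i.e. the sequences match
    if any frame of one closely resembles any frame of the other.
    """
    diffs = seq_a[:, None, :] - seq_b[None, :, :]
    return np.linalg.norm(diffs, axis=2).min()

def annotate(query_seq, labelled_seqs):
    # labelled_seqs: list of (label, sequence) pairs.
    # Assign the query the label of its nearest labelled sequence.
    return min(labelled_seqs,
               key=lambda kv: sequence_distance(query_seq, kv[1]))[0]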
Learning-based approach to real time tracking and analysis of faces
Vinay P. Kumar, T. Poggio
DOI: 10.1109/AFGR.2000.840618 | Published: 2000-03-26
Abstract: This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degrees of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches, this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
Citations: 62
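A compressed, hypothetical sketch of this kind of pipeline: a couple of Haar-like responses computed from an integral image feed an SVM face/non-face classifier. The feature set, patch size, and training data below are placeholders, not the paper's.

import numpy as np
from sklearn.svm import SVC

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] computed in O(1) from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_features(img):
    """Two simple Haar-like edge responses on a patch (illustrative only)."""
    ii = integral_image(img.astype(np.float64))
    h, w = img.shape
    # horizontal edge: top half minus bottom half
    f1 = box_sum(ii, 0, 0, h // 2, w) - box_sum(ii, h // 2, 0, h, w)
    # vertical edge: left half minus right half
    f2 = box_sum(ii, 0, 0, h, w // 2) - box_sum(ii, 0, w // 2, h, w)
    return np.array([f1, f2])

# Train a face/non-face SVM on hypothetical labelled 24x24 patches.
rng = np.random.default_rng(2)
patches = rng.random((200, 24, 24))
labels = rng.integers(0, 2, 200)
X = np.array([haar_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)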
Real-time multiple face detection using active illumination
C. Morimoto, M. Flickner
DOI: 10.1109/AFGR.2000.840605 | Published: 2000-03-26
Abstract: This paper presents a multiple face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs that correspond to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time that moves the camera to keep the tracked face always centered in the image.
Citations: 79
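The bright-pupil/dark-pupil idea can be sketched as a difference of the two illumination frames followed by thresholding and heuristic pairing; the threshold and distance limits below are hypothetical, not values from the paper.

import numpy as np
from scipy import ndimage

def pupil_candidates(bright_frame, dark_frame, threshold=40):
    """Pupils retro-reflect on-axis IR light, so they stand out in the
    difference between on-axis (bright) and off-axis (dark) frames."""
    diff = bright_frame.astype(np.int32) - dark_frame.astype(np.int32)
    mask = diff > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def pair_into_faces(candidates, min_sep=30, max_sep=120, max_tilt=25):
    """Group pupil candidates into eye pairs using heuristics on the
    inter-pupil distance and the vertical offset between the two eyes."""
    faces = []
    for i, (yi, xi) in enumerate(candidates):
        for yj, xj in candidates[i + 1:]:
            sep = np.hypot(xj - xi, yj - yi)
            if min_sep <= sep <= max_sep and abs(yj - yi) <= max_tilt:
                faces.append(((yi, xi), (yj, xj)))
    return faces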
Detection and tracking of facial features in real time using a synergistic approach of spatio-temporal models and generalized Hough-transform techniques
A. Schubert
DOI: 10.1109/AFGR.2000.840621 | Published: 2000-03-26
Abstract: The proposed algorithm requires the description of the facial features as 3D polygons (optionally extended by additional intensity information) which are assembled into a 3D model of the head, provided in separate data files. Detection is achieved by using a special implementation of the generalized Hough transform (GHT), for which the forms are generated by projecting the 3D model into the image plane. In the initialization phase a comparatively wide range of relative positions and attitudes between head and camera has to be tested. Aiming for illumination independence, only information about the sign of the difference between the expected intensities on both sides of the edges of the polygons may be additionally used in the GHT. Once a feature is found, further search for the remaining features can be restricted by using the 3D model. The detection of a minimum number of features starts the tracking phase, which is performed by using an extended Kalman filter (EKF) and assuming a first- or second-order dynamical model for the state variables describing the position and attitude of the head. Synergistic advantages between GHT and EKF can be realized since the EKF and the projection into the image plane yield a rather good prediction of the forms to be detected by the GHT. This considerably reduces the search space in the image and in the parameter space. On the other hand, the GHT offers a solution to the matching problem between image and object features. During the tracking phase the GHT can be further enhanced by monitoring the actual intensities along the edges of the polygons, their assignment to the corresponding 3D object features, and their use for feature selection during the accumulation process. The algorithm runs in real time on a dual Pentium II 333 MHz with a cycle time of 40 ms.
Citations: 14
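The filtering side of this loop can be sketched with a constant-velocity model over the six pose parameters; with a linear measurement it reduces to a plain Kalman step (the paper's EKF handles the nonlinear projection of the 3D model). The noise matrices below are generic placeholders, and the predicted pose is what would constrain the GHT search region.

import numpy as np

# State: 3-D position + attitude (6 DOF) and their velocities (12 total).
DT = 0.040                         # 40 ms cycle time, as reported

F = np.eye(12)                     # constant-velocity transition model
F[:6, 6:] = DT * np.eye(6)
Q = 1e-3 * np.eye(12)              # process noise (placeholder)
H = np.hstack([np.eye(6), np.zeros((6, 6))])   # GHT measures pose only
R = 1e-2 * np.eye(6)               # measurement noise (placeholder)

def kalman_step(x, P, z):
    """One predict/update cycle; the predicted pose (x_pred[:6]) is what
    gets projected into the image to constrain the GHT search."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the pose measurement recovered by the GHT.
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(12) - K @ H) @ P_pred
    return x_new, P_new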