Proceedings IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems: Latest Publications

Robust facial feature point detection under nonlinear illuminations
J. Lai, P. Yuen, Wen Chen, S. Lao, M. Kawade
DOI: 10.1109/RATFG.2001.938927 | Published: 2001-07-13
Abstract: This paper addresses facial feature point detection under different lighting conditions. Our goal is an efficient detection algorithm suitable for practical applications. The requirements to be met are (1) high detection accuracy, (2) low computational time, and (3) robustness to nonlinear illumination. One of the key factors affecting the performance of feature point detection is the accuracy of locating the face boundary; to solve this problem, we propose to make use of skin color, lip color, and face boundary information. The basic idea for overcoming nonlinear illumination is that every person shares the same or similar facial primitives, such as two eyes, one nose, and one mouth, so binarized face images should be similar across people, and once a binary image with appropriate thresholding is obtained from the gray-scale image, the facial feature points can be detected easily. To achieve this, we propose to use the integral optical density (IOD) over the face region, with the average IOD used to detect feature windows. As all of these techniques are simple, the proposed method is computationally efficient and suitable for practical applications. 743 images from the Omron database, with varying facial expressions, glasses, and hairstyles, captured both indoors and outdoors, were used to evaluate the method; the detection accuracy is around 86%, and the computation time of a Matlab implementation on a Pentium III 750 MHz is less than 7 seconds.
Citations: 31
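
The average-IOD step lends itself to an integral-image implementation. The sketch below is a minimal illustration of one reading of that idea, not the authors' code: optical density is approximated as the inverted gray level, window sums come from an integral image, and the densest (darkest) windows are proposed as feature windows. The window size and top-k selection are assumptions.

    import numpy as np

    def average_iod(gray, win=(11, 11)):
        # Optical density approximated as inverted gray level, so dark
        # facial primitives (eyes, brows, mouth) yield high density.
        density = 255.0 - gray.astype(np.float64)
        # Integral image: every window sum in O(1) after one pass.
        ii = np.pad(density.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        h, w = win
        H, W = gray.shape
        sums = (ii[h:H + 1, w:W + 1] - ii[:H - h + 1, w:W + 1]
                - ii[h:H + 1, :W - w + 1] + ii[:H - h + 1, :W - w + 1])
        return sums / (h * w)

    def detect_feature_windows(gray, win=(11, 11), k=4):
        # Rank window origins by average IOD; a real detector would add
        # non-maximum suppression so the k windows do not overlap.
        avg = average_iod(gray, win)
        top = np.argsort(avg, axis=None)[::-1][:k]
        return [np.unravel_index(i, avg.shape) for i in top]
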
Nonlinear mapping from multi-view face patterns to a Gaussian distribution in a low dimensional space
Stan Z. Li, Rong Xiao, ZeYu Li, HongJiang Zhang
DOI: 10.1109/RATFG.2001.938909 | Published: 2001-07-13
Abstract: We investigate a nonlinear mapping by which multi-view face patterns in the input space are mapped to invariant points in a low-dimensional feature space. Invariance to both illumination and view is achieved in two stages. First, a nonlinear mapping from the input space to a low-dimensional feature space is learned from multi-view face examples to achieve illumination invariance. The illumination-invariant feature points of face patterns across views lie on a curve parameterized by the view parameter, and the view parameter of a face pattern can be estimated from the location of its feature point on the curve by a least-squares fit. A second nonlinear mapping, from the illumination-invariant feature space to another feature space of the same dimension, then achieves invariance to both illumination and view; this amounts to performing a normalization based on the view estimate. Through the two-stage nonlinear mapping, multi-view face patterns are mapped to a zero-mean Gaussian distribution in the final feature space. Properties of the nonlinear mappings and the Gaussian face distribution are explored and supported by experiments.
Citations: 17
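
The curve fit and least-squares view estimate can be pictured with a small sketch. This is a generic reconstruction under assumed forms (a polynomial curve per feature dimension, a grid search for the closest curve point, normalization by subtracting that point); the paper's actual mappings are learned and not specified here.

    import numpy as np

    def fit_view_curve(views, features, deg=3):
        # views: (n,) view angles of the training faces; features: (n, d)
        # illumination-invariant feature points. One polynomial per
        # feature dimension; returns (deg+1, d) coefficients.
        return np.polyfit(views, features, deg)

    def estimate_and_normalize(x, coeffs, grid=np.linspace(-90.0, 90.0, 361)):
        # Least-squares view estimate: the grid angle whose curve sample
        # is closest to x. Normalization subtracts that curve point,
        # collapsing all views toward a common center.
        deg = coeffs.shape[0] - 1
        curve = np.vander(grid, deg + 1) @ coeffs  # (G, d) curve samples
        i = int(((curve - x) ** 2).sum(axis=1).argmin())
        return grid[i], x - curve[i]
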
A stabilized adaptive appearance changes model for 3D head tracking
Eisuke Adachi, Takio Kurita, N. Otsu
DOI: 10.1109/RATFG.2001.938928 | Published: 2001-07-13
Abstract: A simple method is presented for 3D head pose estimation and tracking in monocular image sequences, using a generic geometric model. Initialization consists of aligning the perspective projection of the geometric model with the subject's head in the initial image. After initialization, the gray levels from the initial image are mapped onto the visible side of the head model to form a textured object. Only a limited number of points on the object are used, allowing real-time performance even on low-end computers. The appearance changes caused by movement under the complex lighting conditions of a real scene pose a significant problem when fitting the textured model to data from new images. With real human-computer interfaces in mind, we propose a simple adaptive appearance changes model that is updated by the measurements from new images. To stabilize the model, we constrain it to a neighborhood of the initial gray values, defined using simple heuristics.
Citations: 19
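
The stabilization idea, constraining the adapted appearance to a neighborhood of the initial gray values, can be written in a few lines. The blend rate alpha and the band width below are illustrative assumptions; the paper defines the neighborhood with its own heuristics.

    import numpy as np

    def update_appearance(template, observed, init, alpha=0.1, band=20.0):
        # Blend the new measurements into the model, then clamp the
        # result to a band around the initial gray values so the model
        # cannot drift onto the background.
        updated = (1.0 - alpha) * template + alpha * observed
        return np.clip(updated, init - band, init + band)
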
Learning visual models of social engagement
B. Singletary, Thad Starner
DOI: 10.1109/RATFG.2001.938923 | Published: 2001-07-13
Abstract: We introduce a face detector for wearable computers that exploits constraints on face scale and orientation imposed by the proximity of participants in near social interactions. Using this method, we describe a wearable system that perceives "social engagement," i.e., when the wearer begins to interact with other individuals. Our experimental system proved more than 90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data was separated into independent training and test sets. A metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface is discussed. Recognizing social engagement with a user's wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software; for example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable.
Citations: 6
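
The proximity constraint can be made concrete with a pinhole-camera check: a detection whose pixel width implies a distance beyond conversational range is rejected. All constants below (focal length, physical face width, range) are illustrative assumptions, not values from the paper.

    def near_social_range(face_px_width, focal_px=500.0,
                          face_m_width=0.16, max_range_m=1.5):
        # Pinhole model: distance = focal_length * real_width / pixel_width.
        # Detections implying a distance beyond conversational range are
        # rejected as not part of a near social interaction.
        distance_m = focal_px * face_m_width / face_px_width
        return distance_m <= max_range_m
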
Sample-based synthesis of talking heads
H. Graf, E. Cosatto
DOI: 10.1109/RATFG.2001.938903 | Published: 2001-07-13
Abstract: Synthesizing photo-realistic talking heads is a challenging problem, and so far all attempts using conventional computer graphics have produced heads with a distinctly synthetic look. To look credible, a head must show a picture-perfect appearance, natural head movements, and good lip-sound synchronization. We use sample-based graphics to achieve more photo-realistic appearances than is possible with the traditional approach of 3D modeling and texture mapping. For sample-based graphics, parts of faces are first cut from recorded images and stored in a database; new sequences are then synthesized by integrating such parts into whole faces. With sufficient recorded data, this approach produces by far the most natural-looking speech articulation. We integrate 3D modeling with the sample-based technique to enhance its flexibility, which allows, for example, showing the head over a much wider range of orientations.
Citations: 12
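
The core compositing step of sample-based synthesis, pasting a recorded face part into a whole face through an alpha mask, might look like the sketch below; sample selection, scoring, and 3D alignment are omitted, and the grayscale float image format is an assumption.

    import numpy as np

    def composite(base_face, part, mask, top_left):
        # Alpha-blend a recorded face part (e.g. a mouth sample) into
        # the base face; base_face and part are float grayscale images,
        # mask is a float alpha matte with the part's shape.
        r, c = top_left
        h, w = part.shape
        region = base_face[r:r + h, c:c + w]
        base_face[r:r + h, c:c + w] = mask * part + (1.0 - mask) * region
        return base_face
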
Using the active appearance algorithm for face and facial feature tracking
Jörgen Ahlberg
DOI: 10.1109/RATFG.2001.938912 | Published: 2001-07-13
Abstract: This paper describes a system for tracking a face and its facial features in an input video sequence using the active appearance algorithm. The algorithm adapts a wireframe model to the face in each frame, and the adaptation parameters are converted to MPEG-4 facial animation parameters. The results are promising, and we conclude that this approach should be pursued further in our effort to create a real-time model-based coder.
Citations: 61
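
A common way to realize the active appearance search is a precomputed linear regression from texture residuals to parameter updates. The loop below is a sketch of that generic iteration; frame_sample and model_texture are placeholder callables for the warp-and-sample and model-synthesis steps, and the conversion of the converged parameters to MPEG-4 facial animation parameters is model specific and omitted.

    import numpy as np

    def aam_adapt(frame_sample, model_texture, R, params, n_iter=5):
        # frame_sample(params): texture sampled from the frame under the
        # current shape/pose parameters (the warp-and-sample step).
        # model_texture(params): texture predicted by the appearance model.
        # R: precomputed regression from texture residuals to updates.
        for _ in range(n_iter):
            residual = frame_sample(params) - model_texture(params)
            params = params - R @ residual
        return params
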
Real-time stereo tracking of multiple moving heads
R. Luo, Yan Guo
DOI: 10.1109/RATFG.2001.938910 | Published: 2001-07-13
Abstract: Tracking a number of persons moving in a cluttered scene with occlusions is an important issue in computer vision. In this paper we present RealTrack, a system for simultaneous tracking of multiple moving heads in real time for real-world applications. It leverages depth information to alleviate the influence of shadows and to disambiguate occlusions by depth ordering. Augmented by rotation-insensitive head-shoulder contour models and an adaptive tracking algorithm, our system can robustly track people under various conditions, such as crossing, gathering, scattering, and re-appearing in a direction different from the direction of motion before occlusion. We also employ a dynamic background update method, which makes our system suitable for long-term surveillance under slow lighting changes and disturbances such as objects entering and leaving the scene.
Citations: 17
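
The dynamic background update can be sketched as a masked running average: pixels covered by tracked people are frozen, while everything else follows the scene slowly. The rate constant is an assumed value, and this is a generic reconstruction of the idea rather than RealTrack's code.

    import numpy as np

    def update_background(bg, frame, fg_mask, rate=0.02):
        # Blend the new frame into the background only where nothing is
        # tracked (fg_mask False); tracked pixels keep the old model, so
        # people are never absorbed while slow lighting drift is followed.
        blended = (1.0 - rate) * bg + rate * frame
        return np.where(fg_mask, bg, blended)
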
Efficient real-time face tracking in wavelet subspace
R. S. Feris, Roberto M. Cesar, V. Kruger
DOI: 10.1109/RATFG.2001.938919 | Published: 2001-07-13
Abstract: We present a new method for visual face tracking that is carried out in wavelet subspace. First, a wavelet representation of the face template is created, which spans a low-dimensional subspace of the image space. The video frames in which the face is tracked are then orthogonally projected into this low-dimensional subspace, which can be done efficiently through a small number of applications of the wavelet filters. All further computations are performed in the wavelet subspace, which is isomorphic to the image subspace spanned by the wavelets in the representation. Robustness with respect to facial expression and affine deformations, as well as the efficiency of our method, are demonstrated in various experiments.
Citations: 34
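
Orthogonal projection into the wavelet subspace reduces each search window to a handful of coefficients. The sketch below orthonormalizes a small set of wavelet templates with a QR decomposition and projects a patch onto their span; the choice and placement of the wavelets, which the paper derives from the face template, is assumed given.

    import numpy as np

    def project_to_subspace(patch, wavelets):
        # Stack the wavelet templates as column vectors, orthonormalize
        # them, and represent the patch by its few coordinates in that
        # subspace instead of raw pixels.
        B = np.stack([w.ravel() for w in wavelets], axis=1)  # (n_pixels, k)
        Q, _ = np.linalg.qr(B)        # orthonormal basis of the subspace
        return Q.T @ patch.ravel()    # low-dimensional coefficients
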
Reconstruction of movies of facial expressions
G. Moiza, A. Tal, I. Shimshoni, D. Barnett, Y. Moses
DOI: 10.1109/RATFG.2001.938904 | Published: 2001-07-13
Abstract: We present a technique for reconstructing facial movies given a small number of real images and a few parameters for the in-between images; the parameters can be extracted automatically by a tracking system. This scheme can also be used for creating realistic facial animations and for compressing facial movies. The in-between images are produced without ever generating a 3D model of the face. Since facial motions due to expressions are not well defined mathematically, our approach is based on exploiting image patterns in facial optical flow. These patterns were revealed by an empirical study that analyzed and compared image motion patterns in facial expressions.
Citations: 4
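
The flavor of flow-based in-betweening can be shown with a nearest-neighbour backward warp that moves a real frame a fraction t along a flow field. This is a generic stand-in: the paper predicts in-between motion from empirically observed expression flow patterns rather than from a supplied flow field.

    import numpy as np

    def in_between(img0, flow01, t):
        # Nearest-neighbour backward warp: move img0 a fraction t of the
        # way along the flow toward the next real frame. flow01 has shape
        # (H, W, 2), with x displacement in channel 0 and y in channel 1.
        H, W = img0.shape
        ys, xs = np.mgrid[0:H, 0:W]
        x_src = np.clip(np.rint(xs - t * flow01[..., 0]), 0, W - 1).astype(int)
        y_src = np.clip(np.rint(ys - t * flow01[..., 1]), 0, H - 1).astype(int)
        return img0[y_src, x_src]
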
Head gestures for computer control
R. Kjeldsen
DOI: 10.1109/RATFG.2001.938911 | Published: 2001-07-13
Abstract: This paper explores the ways in which head gestures can be applied to the user interface. Four categories of gestural task are considered: pointing, continuous control, spatial selection, and symbolic selection. For each category, the problem is examined in the abstract, focusing on human factors and an analysis of the task; solutions are then presented that take into consideration sensing constraints and computational efficiency. A hybrid pointer control algorithm is described that is better suited to facial pointing than either pure rate control or pure position control. Variations of the algorithm are described for scrolling and selection tasks. The primary contribution is to address a full range of interactive head gestures using a consistent approach that focuses as much on user and task constraints as on sensing considerations.
Citations: 55
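
One way to hybridize position and rate control is a dead-band scheme: small head offsets map directly to pointer position for fine placement, while offsets beyond a threshold drive a pointer velocity for long traversals. The sketch below is an illustrative reconstruction of that idea, not Kjeldsen's published algorithm; all gains and the dead-band width are assumptions.

    def hybrid_pointer(head_offset, screen_center, pointer, dt,
                       pos_gain=600.0, rate_gain=2000.0, dead_band=0.1):
        # head_offset: (x, y) head rotation offset from a neutral pose,
        # in normalized units; pointer: current (x, y) screen position.
        new = []
        for c, p, o in zip(screen_center, pointer, head_offset):
            if abs(o) <= dead_band:
                new.append(c + pos_gain * o)         # position control
            else:
                sign = 1.0 if o > 0 else -1.0
                excess = o - sign * dead_band
                new.append(p + rate_gain * excess * dt)  # rate control
        return tuple(new)
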