{"title":"Iris tracking with feature free contours","authors":"D. Hansen, A. Pece","doi":"10.1109/AMFG.2003.1240845","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240845","url":null,"abstract":"An active contour method is presented and applied to robust iris tracking. The main strength of the method is that the contour model avoids explicit \"feature\" detection: contours are simply assumed to remove statistical dependencies between pixels on opposite sides of the contour. The contour model is utilized in particle filtering together with the EM algorithm. The method shows robustness to light changes and camera defocusing, and makes it possible to use off-the-shelf hardware for gaze-based interaction.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129964755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape and appearance models of talking faces for model-based tracking","authors":"M. Odisio, G. Bailly","doi":"10.1109/AMFG.2003.1240836","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240836","url":null,"abstract":"We present a system that can recover and track the 3D speech movements of a speaker's face for each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker's articulation and the complexity of the facial deformations during speech, speaker-specific models of the face 3D geometry and appearance are built from real data. The geometric model is linearly controlled by only six articulatory parameters. Appearance is seen either as a classical texture map or through local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We evaluate these different appearance models with ground truth data.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124321202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facing the future","authors":"A. Pentland","doi":"10.1109/AMFG.2003.1240816","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240816","url":null,"abstract":"","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential Monte Carlo tracking of body parameters in a sub-space","authors":"T. Moeslund, E. Granum","doi":"10.1109/AMFG.2003.1240828","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240828","url":null,"abstract":"In recent years, sequential Monte Carlo (SMC) methods have been applied to handle some of the problems inherent in model-based tracking. Two issues regarding SMC are investigated in the context of estimating the 3D pose of the human arm. Firstly, we investigate how to apply a subspace to represent the pose of a human arm more efficiently, i.e., to reduce the dimensionality. Secondly, we investigate how to apply a local method to estimate the maximum a posteriori (MAP). The former issue is addressed by combining a screw-axis representation with the position of the hand in the image. The latter is handled by maximising a proximity function to estimate the MAP. We find that both the subspace and the proximity function are sound strategies and that they improve on current SMC methods.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121897652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rank constrained recognition under unknown illuminations","authors":"S. Zhou, R. Chellappa","doi":"10.1109/AMFG.2003.1240818","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240818","url":null,"abstract":"Recognition under illumination variations is a challenging problem. The key is to successfully separate the illumination source from the observed appearance. Once separated, what remains is invariant to illuminant and appropriate for recognition. Most current efforts employ a Lambertian reflectance model with a varying albedo field, ignoring both attached and cast shadows, and restrict themselves to object-specific samples, which undesirably prevents them from recognizing new objects not in the training samples. Using rank constraints on the albedo and the surface normal, we accomplish illumination separation in a more general setting, e.g., with class-specific samples via a factorization approach. In addition, we handle shadows (both attached and cast ones) by treating them as missing values, and resolve the ambiguities in the factorization method by enforcing integrability. As far as recognition is concerned, a bootstrap set which is just a collection of two-dimensional image observations can be utilized to avoid the explicit requirement that three-dimensional information be available. Our approaches produce good recognition results as shown in our experiments using the PIE database.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"2007 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130901485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Head pose estimation using Fisher Manifold learning","authors":"Longbin Chen, Lei Zhang, Yuxiao Hu, M. Li, H. Zhang","doi":"10.1109/AMFG.2003.1240844","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240844","url":null,"abstract":"Here, we propose a new learning strategy for head pose estimation. Our approach uses nonlinear interpolation to estimate the head pose using the learning result from face images of two head poses. An advantage of our method over regression methods is that it requires only training images of two head poses and has better generalization ability. It outperforms existing methods, such as regression and multiclass classification, on both synthetic and real face images. The average head pose estimation error for yaw rotation is about 4°, which shows that our method is effective for head pose estimation.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124915734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient active appearance model for real-time head and facial feature tracking","authors":"F. Dornaika, J. Ahlberg","doi":"10.1109/AMFG.2003.1240840","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240840","url":null,"abstract":"We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. Classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and superior performance of the developed framework.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131793242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is face recognition in pictures affected by the center of projection?","authors":"C. Liu","doi":"10.1109/AMFG.2003.1240824","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240824","url":null,"abstract":"Recognition of unfamiliar faces can be severely impaired when two pictures of a face are taken from different camera distances. This effect of perspective transformation may be predicted either by a model-based theory, for which the impairment shows a difficulty in constructing the 3D surface geometry, or by an image-based theory, for which it results from dissimilarity between 2D images. To test these hypotheses, we measured recognition performance when face images were viewed either at their center of projection (the camera position) or at other distances. Based on past findings [A.L. Nicholls, et al. (1993)], [T. Yang, et al. (1999)], viewing from the center of projection should help correct marginal distortions of shapes due to large perspective convergence and hence facilitate reconstruction of 3D shape from perspective cues. However, the results showed little support for this prediction. The lack of 3D shape reconstruction and the effects of image similarity provided favorable evidence for the image-based theory of face recognition.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134284608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of 3D hand shape and posture from image sequences for sign language recognition","authors":"H. Fillbrandt, Suat Akyol, K. Kraiss","doi":"10.1109/AMFG.2003.1240841","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240841","url":null,"abstract":"We propose a novel method for extracting natural hand parameters from monocular image sequences. The purpose is to improve a vision-based sign language recognition system by providing detailed information about the finger constellation and the 3D hand posture. To this end, the hand is modelled by a set of 2D appearance models, each representing a limited variation range of 3D hand shape and posture. The single models are linked to each other according to the natural neighbourhood of the corresponding hand status. During an image sequence, necessary model transitions are executed towards one of the current neighbour models. The natural hand parameters are calculated from the shape and texture parameters of the current model, using a relation estimated by linear regression. The method is robust against large differences between subsequent frames and also against poor image quality. It can be implemented in real-time and handles occlusion and partially missing image information well.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128278496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face modeling and recognition in 3-D","authors":"G. Medioni, R. Waupotitsch","doi":"10.1109/AMFG.2003.1240848","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240848","url":null,"abstract":"We demonstrate a complete and automatic system to perform face authentication by analysis of 3D facial shape. In this live demonstration of the system, the subject is first enrolled and given a unique identifier. Subsequently, the user's identity is verified by providing the reference identifier. Our approach is to be contrasted with traditional face recognition methods, which compare pictures of faces. We also analyzed the image quality requirements for generating a good quality 3D reconstruction from stereo.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121667002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}