2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443): Latest Publications

Iris tracking with feature free contours
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240845
D. Hansen, A. Pece
Abstract: An active contour method is presented and applied to robust iris tracking. The main strength of the method is that the contour model avoids explicit "feature" detection: contours are simply assumed to remove statistical dependencies on opposite sides of the contour. The contour model is utilized in particle filtering together with the EM algorithm. The method shows robustness to light changes and camera defocusing, and makes it possible to use off-the-shelf hardware for gaze-based interaction.
Citations: 23
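The method above couples a feature-free contour likelihood with particle filtering and the EM algorithm. The sketch below illustrates only the particle-filtering half on a circular iris hypothesis (cx, cy, r); the across-contour intensity-difference score and the random-walk dynamics are stand-in assumptions, not the paper's actual likelihood, which instead models statistical dependencies across the contour.

```python
import numpy as np

def contour_score(img, cx, cy, r, n_points=32, d=2.0):
    """Stand-in likelihood: mean absolute grey-level difference between points
    sampled just inside and just outside a hypothesized circular contour."""
    h, w = img.shape
    score = 0.0
    for a in np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False):
        xi, yi = int(round(cx + (r - d) * np.cos(a))), int(round(cy + (r - d) * np.sin(a)))
        xo, yo = int(round(cx + (r + d) * np.cos(a))), int(round(cy + (r + d) * np.sin(a)))
        if 0 <= xi < w and 0 <= yi < h and 0 <= xo < w and 0 <= yo < h:
            score += abs(float(img[yi, xi]) - float(img[yo, xo]))
    return score / n_points

def particle_filter_step(img, particles, motion_std=(2.0, 2.0, 1.0)):
    """One predict/weight/resample cycle over (cx, cy, r) particles."""
    n = len(particles)
    # Predict: random-walk dynamics on the circle centre and radius.
    particles = particles + np.random.randn(n, 3) * np.asarray(motion_std)
    # Weight: score each hypothesis by the across-contour contrast.
    weights = np.array([contour_score(img, *p) for p in particles]) + 1e-12
    weights /= weights.sum()
    # Resample proportionally to the weights.
    return particles[np.random.choice(n, size=n, p=weights)]
```

A tracked state would then be read off the resampled particle set (for example, as its mean); the abstract indicates the paper additionally uses the EM algorithm, which this sketch omits.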
Shape and appearance models of talking faces for model-based tracking
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240836
M. Odisio, G. Bailly
Abstract: We present a system that can recover and track the 3D speech movements of a speaker's face for each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker's articulation and the complexity of the facial deformations during speech, speaker-specific models of the face 3D geometry and appearance are built from real data. The geometric model is linearly controlled by only six articulatory parameters. Appearance is seen either as a classical texture map or through local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We evaluate these different appearance models with ground truth data.
Citations: 16
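The geometric model described above is linearly controlled by only six articulatory parameters. A minimal sketch of that linear control is shown below; the mean shape and deformation basis are placeholder arrays, whereas the paper builds them from real speaker data.

```python
import numpy as np

class LinearArticulatoryShapeModel:
    """3D face vertices = mean shape + linear combination of articulatory deformation modes."""

    def __init__(self, mean_shape, basis):
        self.mean_shape = mean_shape   # (n_vertices, 3)
        self.basis = basis             # (6, n_vertices, 3), one mode per articulatory parameter

    def synthesize(self, params):
        """Return the vertex positions for six articulatory control values."""
        return self.mean_shape + np.tensordot(params, self.basis, axes=1)

# Placeholder usage with random data standing in for the speaker-specific model.
model = LinearArticulatoryShapeModel(np.zeros((500, 3)), 0.01 * np.random.randn(6, 500, 3))
vertices = model.synthesize(np.array([0.3, -0.1, 0.0, 0.2, 0.0, 0.1]))
```

An analysis-by-synthesis tracker, as described in the abstract, would search these six parameters (together with head pose) so that the synthesized appearance best matches each frame.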
Facing the future
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240816
A. Pentland
Citations: 240
Sequential Monte Carlo tracking of body parameters in a sub-space
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240828
T. Moeslund, E. Granum
Abstract: In recent years sequential Monte Carlo (SMC) methods have been applied to handle some of the problems inherent to model-based tracking. Two issues regarding SMC are investigated in the context of estimating the 3D pose of the human arm. Firstly, we investigate how to apply a subspace to represent the pose of a human arm more efficiently, i.e., reducing the dimensionality. Secondly, we investigate how to apply a local method to estimate the maximum a posteriori (MAP). The former issue is based on combining a screw axis representation with the position of the hand in the image. The latter issue is handled by applying a method based on maximising a proximity function to estimate the MAP. We find that both the subspace and the proximity function are sound strategies and that they are an improvement over current SMC methods.
Citations: 11
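One of the two issues above is estimating the MAP from the particle set with a local method that maximises a proximity function. The sketch below is a stand-in for that idea only: it evaluates a weighted Gaussian-kernel proximity at each particle and returns the maximiser; the paper's actual proximity function and its screw-axis subspace parameterisation are not reproduced.

```python
import numpy as np

def proximity_map_estimate(particles, weights, bandwidth=0.5):
    """Pick the particle maximising a kernel-weighted proximity function,
    a stand-in for a MAP estimate from a particle set (as opposed to the
    weighted mean, which can fall between modes of the posterior)."""
    diffs = particles[:, None, :] - particles[None, :, :]     # pairwise differences
    sq_dist = np.sum(diffs ** 2, axis=-1)                     # squared distances
    proximity = (weights[None, :] * np.exp(-sq_dist / (2.0 * bandwidth ** 2))).sum(axis=1)
    return particles[np.argmax(proximity)]

# Placeholder usage: 100 particles over a 4-dimensional reduced arm-pose state.
particles = np.random.randn(100, 4)
weights = np.full(100, 0.01)
pose_map = proximity_map_estimate(particles, weights)
```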
Rank constrained recognition under unknown illuminations
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240818
S. Zhou, R. Chellappa
Abstract: Recognition under illumination variations is a challenging problem. The key is to successfully separate the illumination source from the observed appearance. Once separated, what remains is invariant to illuminant and appropriate for recognition. Most current efforts employ a Lambertian reflectance model with varying albedo field ignoring both attached and cast shadows, but restrict themselves by using object-specific samples, which undesirably deprives them of recognizing new objects not in the training samples. Using rank constraints on the albedo and the surface normal, we accomplish illumination separation in a more general setting, e.g., with class-specific samples via a factorization approach. In addition, we handle shadows (both attached and cast ones) by treating them as missing values, and resolve the ambiguities in the factorization method by enforcing integrability. As far as recognition is concerned, a bootstrap set which is just a collection of two-dimensional image observations can be utilized to avoid the explicit requirement that three-dimensional information be available. Our approaches produce good recognition results as shown in our experiments using the PIE database.
Citations: 27
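The separation above rests on a rank constraint: a matrix of class-specific images taken under varying illumination factors into a low-rank product of lighting coefficients and a surface (albedo and normal) basis. Below is a minimal SVD-based sketch of that factorization; the paper's handling of shadows as missing values and its integrability constraint are omitted.

```python
import numpy as np

def low_rank_separation(images, rank=3):
    """images: (n_images, n_pixels) stack of one class under varying lighting.
    Returns lighting coefficients L (n_images, rank) and a surface basis
    B (rank, n_pixels) such that images ~ L @ B."""
    U, S, Vt = np.linalg.svd(images, full_matrices=False)
    L = U[:, :rank] * S[:rank]   # per-image illumination coefficients
    B = Vt[:rank]                # illumination-invariant surface component
    return L, B

# Placeholder usage: 10 images of 64x64 pixels.
L, B = low_rank_separation(np.random.rand(10, 64 * 64))
```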
Head pose estimation using Fisher Manifold learning
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240844
Longbin Chen, Lei Zhang, Yuxiao Hu, M. Li, H. Zhang
Abstract: We propose a new learning strategy for head pose estimation. Our approach uses nonlinear interpolation to estimate the head pose from the learning result on face images of two head poses. The advantage of our method over regression methods is that it requires training images of only two head poses and has better generalization ability. It outperforms existing methods, such as regression and multiclass classification, on both synthetic and real face images. The average head pose estimation error in yaw rotation is about 4°, which shows that our method is effective for head pose estimation.
Citations: 80
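The strategy above learns from face images of only two head poses and interpolates the pose of a new image from that learning result. The sketch below is an assumed simplification: it computes a Fisher discriminant direction between the two pose classes and interpolates yaw linearly along it, whereas the paper uses a nonlinear interpolation on the learned manifold.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher discriminant direction separating two sets of face feature vectors,
    e.g. frontal faces (X0) and 30-degree-yaw faces (X1); rows are samples."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w, m0, m1

def estimate_yaw(x, w, m0, m1, yaw0=0.0, yaw1=30.0):
    """Interpolate yaw from the projection of x onto the Fisher axis
    (linear interpolation here for brevity)."""
    t = np.dot(x - m0, w) / np.dot(m1 - m0, w)
    return yaw0 + t * (yaw1 - yaw0)
```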
Efficient active appearance model for real-time head and facial feature tracking
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240840
F. Dornaika, J. Ahlberg
Abstract: We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and superior performance of the developed framework.
Citations: 27
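The adaptation above is split, for each frame, into a robust 3D pose stage based on texture consistency, followed by an AAM-style search for local feature motion. The skeleton below mirrors that split under stated assumptions: `synthesize` and `refine_features` are hypothetical callables standing in for the statistical face-texture model and the active appearance model search, and the Geman-McClure penalty is only one possible choice of robust statistic.

```python
import numpy as np

def robust_texture_cost(observed, synthesized, scale=10.0):
    """Robust consistency measure between observed and model-synthesized face
    texture: a Geman-McClure penalty that down-weights outlier pixels."""
    r = observed - synthesized
    return float(np.sum(r ** 2 / (r ** 2 + scale ** 2)))

def two_stage_adaptation(frame_texture, synthesize, refine_features, pose0, anim0):
    """Stage 1: recover the 3D head pose by minimising the robust texture cost
    (a coarse 1D search here). Stage 2: with the pose fixed, recover the local
    facial-feature motion via the AAM-style search."""
    best_pose, best_cost = pose0, float("inf")
    for delta in np.linspace(-0.1, 0.1, 9):
        candidate = pose0 + delta
        cost = robust_texture_cost(frame_texture, synthesize(candidate, anim0))
        if cost < best_cost:
            best_pose, best_cost = candidate, cost
    animation = refine_features(frame_texture, best_pose, anim0)
    return best_pose, animation
```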
Is face recognition in pictures affected by the center of projection?
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240824
C. Liu
Abstract: Recognition of unfamiliar faces can be severely impaired when two pictures of a face are taken from different camera distances. This effect of perspective transformation may be predicted either by a model-based theory, for which the impairment shows a difficulty in constructing the 3D surface geometry, or by an image-based theory, for which it results from dissimilarity between 2D images. To test these hypotheses, we measured recognition performance when face images were viewed either at their center of projection (the camera position) or at other distances. Based on past findings [A.L. Nicholls, et al. (1993)], [T. Yang, et al. (1999)], the same position should help correct marginal distortions of shapes due to large perspective convergence and hence facilitate reconstruction of 3D shape from perspective cues. However, the results showed little support for this prediction. The lack of 3D shape reconstruction and the effects of image similarity provided favorable evidence for the image-based theory of face recognition.
Citations: 8
Extraction of 3D hand shape and posture from image sequences for sign language recognition
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240841
H. Fillbrandt, Suat Akyol, K. Kraiss
Abstract: We propose a novel method for extracting natural hand parameters from monocular image sequences. The purpose is to improve a vision-based sign language recognition system by providing detailed information about the finger constellation and the 3D hand posture. Therefore, the hand is modelled by a set of 2D appearance models, each representing a limited variation range of 3D hand shape and posture. The single models are linked to each other according to the natural neighbourhood of the corresponding hand status. During an image sequence, necessary model transitions are executed towards one of the current neighbour models. The natural hand parameters are calculated from the shape and texture parameters of the current model, using a relation estimated by linear regression. The method is robust against large differences between subsequent frames and also against poor image quality. It can be implemented in real time and offers good properties for handling occlusion and partly missing image information.
Citations: 60
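In the method above, the natural hand parameters are computed from the shape and texture parameters of the current 2D appearance model through a relation estimated by linear regression. A minimal least-squares sketch of that regression step follows; the array shapes and variable names are assumptions.

```python
import numpy as np

def fit_parameter_regression(model_params, hand_params):
    """Least-squares linear map from appearance-model parameters (shape and
    texture) to natural hand parameters (e.g. finger angles, 3D posture).
    model_params: (n_samples, p); hand_params: (n_samples, q)."""
    X = np.hstack([model_params, np.ones((model_params.shape[0], 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, hand_params, rcond=None)
    return W   # (p + 1, q)

def predict_hand_params(W, model_params):
    """Apply the learned map to new appearance-model parameters."""
    X = np.hstack([model_params, np.ones((model_params.shape[0], 1))])
    return X @ W
```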
Face modeling and recognition in 3-D
Pub Date: 2003-10-17 | DOI: 10.1109/AMFG.2003.1240848
G. Medioni, R. Waupotitsch
Abstract: We demonstrate a complete and automatic system to perform face authentication by analysis of 3D facial shape. In this live demonstration of the system, the subject is first enrolled, and given a unique identifier. Subsequently, the user's identity is verified by providing the reference identifier. Our approach is to be contrasted with traditional face recognition methods, which compare pictures of faces. We also analyzed the image quality requirements in order to generate a good quality 3D reconstruction from stereo.
Citations: 93