2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443): Latest Publications

Manifold of facial expression
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240820
Ya Chang, Changbo Hu, M. Turk
{"title":"Manifold of facial expression","authors":"Ya Chang, Changbo Hu, M. Turk","doi":"10.1109/AMFG.2003.1240820","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240820","url":null,"abstract":"We propose the concept of manifold of facial expression based on the observation that images of a subject's facial expressions define a smooth manifold in the high dimensional image space. Such a manifold representation can provide a unified framework for facial expression analysis. We first apply active wavelet networks (AWN) on the image sequences for facial feature localization. To learn the structure of the manifold in the feature space derived by AWN, we investigated two types of embeddings from a high dimensional space to a low dimensional space: locally linear embedding (LLE) and Lipschitz embedding. Our experiments show that LLE is suitable for visualizing expression manifolds. After applying Lipschitz embedding, the expression manifold can be approximately considered as a super-spherical surface in the embedding space. For manifolds derived from different subjects, we propose a nonlinear alignment algorithm that keeps the semantic similarity of facial expression from different subjects on one generalized manifold. We also show that nonlinear alignment outperforms linear alignment in expression classification.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122634076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 139
Using similarity scores from a small gallery to estimate recognition performance for larger galleries
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240830
A. Johnson, Jie Sun, A. Bobick
{"title":"Using similarity scores from a small gallery to estimate recognition performance for larger galleries","authors":"A. Johnson, Jie Sun, A. Bobick","doi":"10.1109/AMFG.2003.1240830","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240830","url":null,"abstract":"We present a method to estimate recognition performance for large galleries of individuals using data from a significantly smaller gallery. This is achieved by mathematically modelling a cumulative match characteristic (CMC) curve. The similarity scores of the smaller gallery are used to estimate the parameters of the model. After the parameters are estimated, the rank 1 point of the modelled CMC curve is used as our measure of recognition performance. The rank 1 point (i.e.; nearest-neighbor) represents the probability of correctly identifying an individual from a gallery of a particular size; however, as gallery size increases, the rank 1 performance decays. Our model, without making any assumptions about the gallery distribution, replicates this effect, and allows us to estimate recognition performance as gallery size increases without needing to physically add more individuals to the gallery. This model is evaluated on face recognition techniques using a set of faces from the FERET database.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125240110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Absolute head pose estimation from overhead wide-angle cameras
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240829
Ying-li Tian, L. Brown, J. Connell, Sharath Pankanti, A. Hampapur, A. Senior, R. Bolle
{"title":"Absolute head pose estimation from overhead wide-angle cameras","authors":"Ying-li Tian, L. Brown, J. Connell, Sharath Pankanti, A. Hampapur, A. Senior, R. Bolle","doi":"10.1109/AMFG.2003.1240829","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240829","url":null,"abstract":"Most surveillance cameras have a wide-angle field of view and are situated unobtrusively at overhead positions. For this type of application, head pose estimation is very challenging because of the limitations of the quality and resolution of the incoming data. In addition, even though the absolute head pose is constant, the head pose in camera view changes depending upon the location of head with respect the camera. We present a solution to estimate absolute coarse head pose for wide-angle overhead cameras by integrating 3D head position and pose information. The work involves image-based learning, pose correction based on 3D position, and real-time multicamera integration of low-resolution imagery. The system can be applied to an active face catalogger to obtain the best view of the face for surveillance, to customer relationship management to record behavior in retail stores or to virtual reality as an input device.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133870519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
Fully automatic upper facial action recognition
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240843
Ashish Kapoor, Yuan Qi, Rosalind W. Picard
{"title":"Fully automatic upper facial action recognition","authors":"Ashish Kapoor, Yuan Qi, Rosalind W. Picard","doi":"10.1109/AMFG.2003.1240843","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240843","url":null,"abstract":"We provide a new fully automatic framework to analyze facial action units, the fundamental building blocks of facial expression enumerated in Paul Ekman's facial action coding system (FACS). The action units examined here include upper facial muscle movements such as inner eyebrow raise, eye widening, and so forth, which combine to form facial expressions. Although prior methods have obtained high recognition rates for recognizing facial action units, these methods either use manually preprocessed image sequences or require human specification of facial features; thus, they have exploited substantial human intervention. We present a fully automatic method, requiring no such human specification. The system first robustly detects the pupils using an infrared sensitive camera equipped with infrared LEDs. For each frame, the pupil positions are used to localize and normalize eye and eyebrow regions, which are analyzed using PCA to recover parameters that relate to the shape of the facial features. These parameters are used as input to classifiers based on support vector machines to recognize upper facial action units and all their possible combinations. On a completely natural dataset with lots of head movements, pose changes and occlusions, the new framework achieved a recognition accuracy of 69.3% for each individual AU and an accuracy of 62.5% for all possible AU combinations. This framework achieves a higher recognition accuracy on the Cohn-Kanade AU-coded facial expression database, which has been previously used to evaluate other facial action recognition system.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114067798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 150
Human body tracking with auxiliary measurements
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240832
M. Lee, I. Cohen
{"title":"Human body tracking with auxiliary measurements","authors":"M. Lee, I. Cohen","doi":"10.1109/AMFG.2003.1240832","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240832","url":null,"abstract":"We present two techniques for improving human body tracking within the particle filtering scheme. Both techniques explore the use of auxiliary measurements. The first technique uses optical flow cues to improve the sampling distribution. The second technique involves the detection of individual body parts, namely the hand, head and torso; and using these detection results to provide additional inference on subsets of state parameters. This method enables the automatic initialization of state vector and allows recovering from tracking failures. These two methods improve the overall accuracy, efficiency and robustness of human body tracking as illustrated by the experimental results.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116711987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Component-based LDA method for face recognition with one training sample
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240833
Jian Huang, P. Yuen, Wensheng Chen, J. Lai
{"title":"Component-based LDA method for face recognition with one training sample","authors":"Jian Huang, P. Yuen, Wensheng Chen, J. Lai","doi":"10.1109/AMFG.2003.1240833","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240833","url":null,"abstract":"Many face recognition algorithms/systems have been developed in the last decade and excellent performances are also reported when there is sufficient number of representative training samples. In many real-life applications, only one training sample is available. Under this situation, the performance of existing algorithms will be degraded dramatically or the formulation is incorrect, which in turn, the algorithm cannot be implemented. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples, but also consider the face detection localization error while training. After that, we employ a sub-space LDA method, which is tailor-made for small number of training samples, for the local feature projection to maximize the discrimination power. Finally, combining the contributions of each local feature draws the recognition decision. FERET database is used for evaluating the proposed method and results are encouraging.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125044212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
Multi-modal face tracking using Bayesian network
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240835
Fang Liu, X. Lin, S. Li, Yuanchun Shi
{"title":"Multi-modal face tracking using Bayesian network","authors":"Fang Liu, X. Lin, S. Li, Yuanchun Shi","doi":"10.1109/AMFG.2003.1240835","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240835","url":null,"abstract":"We present a Bayesian network based multimodal fusion method for robust and real-time face tracking. The Bayesian network integrates a prior of second order system dynamics, and the likelihood cues from color, edge and face appearance. While different modalities have different confidence scales, we encode the environmental factors related to the confidences of modalities into the Bayesian network, and develop a Fisher discriminant analysis method for learning optimal fusion. The face tracker may track multiple faces under different poses. It is made up of two stages. First hypotheses are efficiently generated using a coarse-to-fine strategy; then multiple modalities are integrated in the Bayesian network to evaluate the posterior of each hypothesis. The hypothesis that maximizes a posterior (MAP) is selected as the estimate of the object state. Experimental results demonstrate the robustness and real-time performance of our face tracking approach.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126319619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Inference of human postures by classification of 3D human body shape
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240827
I. Cohen, Hongxia Li
{"title":"Inference of human postures by classification of 3D human body shape","authors":"I. Cohen, Hongxia Li","doi":"10.1109/AMFG.2003.1240827","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240827","url":null,"abstract":"We describe an approach for inferring the body posture using a 3D visual-hull constructed from a set of silhouettes. We introduce an appearance-based, view-independent, 3D shape description for classifying and identifying human posture using a support vector machine. The proposed global shape description is invariant to rotation, scale and translation and varies continuously with 3D shape variations. This shape representation is used for training a support vector machine allowing the characterization of human body postures from the computed visual hull. The main advantage of the shape description is its ability to capture human shape variation allowing the identification of body postures across multiple people. The proposed method is illustrated on a set of video streams of body postures captured by four synchronous cameras.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 154
CSLDS: Chinese sign language dialog system
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240850
Yiqiang Chen, Wen Gao, Gaolin Fang, Changshui Yang, Zhaoqi Wang
{"title":"CSLDS: Chinese sign language dialog system","authors":"Yiqiang Chen, Wen Gao, Gaolin Fang, Changshui Yang, Zhaoqi Wang","doi":"10.1109/AMFG.2003.1240850","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240850","url":null,"abstract":"We present a Chinese sign language dialog system (CSLDS) based on the technique of large vocabulary continuous Chinese sign language recognition (CSLR) and Chinese sign language synthesis (CSLS). This system can show the advance technology on gesture recognition and synthesis well and can apply to more powerful system combined with speech recognition and synthesis technology, which then can allow the convenient communication between deaf and hearing society.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"41 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123341572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
A quantified study of facial asymmetry in 3D faces
2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443) Pub Date: 2003-10-17 DOI: 10.1109/AMFG.2003.1240847
Yanxi Liu, J. Palmer
{"title":"A quantified study of facial asymmetry in 3D faces","authors":"Yanxi Liu, J. Palmer","doi":"10.1109/AMFG.2003.1240847","DOIUrl":"https://doi.org/10.1109/AMFG.2003.1240847","url":null,"abstract":"With the rapid development of 3D imaging technology, the wide usage of 3D surface information for research and applications is becoming a convenient reality. We focus on a quantified analysis of facial asymmetry of more than 100 3D human faces (individuals). We investigate whether facial asymmetry differs statistically significantly from a bilateral symmetry assumption, and the role of global and local facial asymmetry for gender discrimination.","PeriodicalId":388409,"journal":{"name":"2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123618732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 41