Proceedings of the 17th International Conference on Pattern Recognition, 2004 (ICPR 2004): Latest Articles

Information fusion in face identification
Wenchao Zhang, S. Shan, Wen Gao, Yizheng Chang, B. Cao, Peng Yang
DOI: https://doi.org/10.1109/ICPR.2004.1334686
Abstract: Information fusion of multi-modal biometrics has attracted much attention in recent years. This paper, however, focuses on information fusion within a single modality, namely the face biometric. Two different representation methods, gray-level intensity and Gabor features, are exploited for fusion. We study the fusion problem in face recognition at both the face representation level and the confidence level. At the representation level, both PCA feature fusion and LDA feature fusion are considered, while at the confidence level, the sum rule and the product rule are investigated. We show through experiments on the FERET face database and our own face database that appropriate information fusion can improve the performance of face recognition and verification. This suggests that gray-level intensity and Gabor features complement each other under a suitable fusion scheme.
Citations: 33
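
A minimal sketch of the confidence-level fusion described in the abstract above, combining per-class scores from a gray-level matcher and a Gabor-feature matcher with the sum and product rules. The function name and the assumption that both score vectors are normalised to sum to one are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_scores(gray_scores, gabor_scores, rule="sum"):
    """Confidence-level fusion of two per-class score vectors.

    gray_scores, gabor_scores: 1-D arrays of matching scores against the gallery
    classes, each assumed to be normalised so the scores sum to one.
    """
    if rule == "sum":
        fused = 0.5 * (gray_scores + gabor_scores)
    elif rule == "product":
        fused = gray_scores * gabor_scores
        fused = fused / fused.sum()
    else:
        raise ValueError("rule must be 'sum' or 'product'")
    return fused

# Identification: pick the gallery class with the highest fused score.
gray = np.array([0.1, 0.6, 0.3])
gabor = np.array([0.2, 0.5, 0.3])
print(np.argmax(fuse_scores(gray, gabor, rule="product")))
```
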
Spin images for retrieval of 3D objects by local and global similarity
J. Assfalg, A. Bimbo, P. Pala
DOI: https://doi.org/10.1109/ICPR.2004.1334675
Abstract: The ever increasing availability of 3D models calls for tools that support their effective and efficient management. Among these tools, those enabling content-based retrieval play a key role. In this paper, we present a novel approach to global and local content-based retrieval of 3D objects that is based on spin images. Spin images are used to derive a view-independent description of both database and query objects. A set of spin images is first created for each object and for the parts it is composed of; then, a descriptor is evaluated for each spin image in the set; finally, clustering is performed on the set of image-based descriptors of each object to achieve a compact representation. Experimental results are presented for a test database of about 300 models, showing the effectiveness of retrieval for both object and part similarity.
Citations: 18
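
As an illustration of the underlying descriptor, the following sketch accumulates a basic spin image for one oriented surface point (basis point plus normal). The bin size, image width, and the omission of bilinear interpolation are simplifying assumptions, not details from the paper.

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.01, image_width=16):
    """Accumulate a simple spin image for the oriented point (p, n).

    points: (N, 3) surface points; p: (3,) basis point; n: (3,) unit surface normal.
    Each point x is mapped to (alpha, beta), where beta is its signed height along
    the normal and alpha its radial distance from the normal axis, and the pairs
    are accumulated into a 2-D histogram (the spin image).
    """
    d = points - p
    beta = d @ n                                                     # height along the normal
    alpha = np.sqrt(np.maximum(np.einsum("ij,ij->i", d, d) - beta**2, 0.0))
    row = np.floor(image_width / 2 - beta / bin_size).astype(int)    # beta axis, centred
    col = np.floor(alpha / bin_size).astype(int)                     # alpha axis
    img = np.zeros((image_width, image_width))
    keep = (row >= 0) & (row < image_width) & (col >= 0) & (col < image_width)
    np.add.at(img, (row[keep], col[keep]), 1.0)
    return img
```
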
Pose invariant affect analysis using thin-plate splines
J. McCall, M. Trivedi
DOI: https://doi.org/10.1109/ICPR.2004.1334688
Abstract: This paper introduces a method for pose-invariant facial affect analysis and a real-time system for facial affect analysis built on this method. The method is centered on developing a feature vector that is robust to rigid-body movements while retaining the information important to facial affect analysis. This feature vector is produced using thin-plate splines, which separate affine transformations from nonlinear transformations quickly and efficiently. The affine portion can be used to describe the rigid-body motion, because planar motion under a perspective projection can be approximated by an affine transformation. Removing the affine portion and using the nonlinear portion of the thin-plate spline warping provides information about the nonlinear motion caused by facial affect. The real-time system built on this method consists of three main components: facial landmark tracking, feature vector extraction, and affect classification. The system processes streaming video in real time. Testing examined the system's invariance to rotation as well as its subject independence. Finally, its application in real-world environments is discussed.
Citations: 21
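
A minimal sketch of the standard 2-D thin-plate spline fit that such a decomposition relies on: solving for an affine block (an approximation of the rigid/planar head motion) and a set of nonlinear warp weights (the residual deformation carrying the affect information). The solver below is a generic TPS fit, assumed for illustration rather than taken from the authors' implementation.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks onto dst landmarks.

    src, dst: (k, 2) arrays of corresponding facial landmarks.
    Returns (affine, warp_weights): the (3, 2) affine block approximating the
    rigid/planar motion, and the (k, 2) nonlinear warp weights describing the
    remaining deformation.
    """
    k = src.shape[0]
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r > 0, r**2 * np.log(r), 0.0)             # U(r) = r^2 log r
    P = np.hstack([np.ones((k, 1)), src])                      # affine basis [1, x, y]
    L = np.zeros((k + 3, k + 3))
    L[:k, :k], L[:k, k:], L[k:, :k] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(L, rhs)
    return params[k:], params[:k]
```
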
Probabilistic combination of multiple modalities to detect interest
Ashish Kapoor, Rosalind W. Picard, Y. Ivanov
DOI: https://doi.org/10.1109/ICPR.2004.1334690
Abstract: This paper describes a new approach to combining multiple modalities and applies it to the problem of affect recognition. The problem is posed as a combination of classifiers in a probabilistic framework that naturally gives rise to the concepts of experts and critics. Each channel of data has an associated expert that generates beliefs about the correct class. Probabilistic models of error, together with critics that predict the performance of each expert on the current input, are used to combine the experts' beliefs about the correct class. The method is applied to detect the affective state of interest using information from the face, posture, and the task the subjects are performing. Classification using multiple modalities achieves a recognition accuracy of 67.8%, outperforming classification using the individual modalities. Furthermore, the proposed combination scheme achieves the greatest reduction in error when compared with other classifier combination methods.
Citations: 120
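
A simplified sketch of combining per-channel posteriors with critic-predicted reliabilities, in the spirit of the expert/critic framework described above. The mixture-with-uniform fallback used here is an assumption made for illustration, not the paper's exact error model.

```python
import numpy as np

def combine_experts(expert_posteriors, critic_reliabilities):
    """Combine per-channel class posteriors, weighting each expert by its critic.

    expert_posteriors: (n_channels, n_classes) beliefs, e.g. from face, posture
    and task experts.  critic_reliabilities: (n_channels,) critic-predicted
    probability that each expert is correct on the current input.  Unreliable
    channels are blended towards a uniform belief before the channels are
    multiplied together.
    """
    n_channels, n_classes = expert_posteriors.shape
    uniform = np.full(n_classes, 1.0 / n_classes)
    tempered = (critic_reliabilities[:, None] * expert_posteriors
                + (1.0 - critic_reliabilities)[:, None] * uniform)
    fused = tempered.prod(axis=0)
    return fused / fused.sum()

# Example: the face expert is confident, the posture expert unreliable on this frame.
posteriors = np.array([[0.8, 0.2], [0.3, 0.7]])
reliability = np.array([0.9, 0.4])
print(combine_experts(posteriors, reliability))
```
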
Gesture recognition using temporal template based trajectories
Caifeng Shan, Yucheng Wei, Xianchao Qiu, T. Tan
DOI: https://doi.org/10.1109/ICPR.2004.1334687
Abstract: In this paper, a novel approach to hand gesture recognition is proposed. The spatio-temporal trajectory of the hand gesture is first tracked by the mean shift embedded particle filter (MSEPF) and then represented in a static image using a temporal template. Hand gestures are recognized by a two-layer classifier based on statistical shape and orientation analysis of such temporal template based trajectories (TTBT). Experimental results show that the algorithm achieves a high recognition rate.
Citations: 41
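
A rough sketch of rendering a tracked trajectory into a static temporal template, where older points are dimmed so the image encodes the timing of the path, in the spirit of a motion-history image. The image size and decay factor are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def trajectory_template(trajectory, shape=(120, 160), decay=0.95):
    """Render a tracked hand trajectory into a static temporal template.

    trajectory: sequence of (row, col) hand positions over time.  Older points
    are dimmed by `decay`, so the resulting image encodes both the path and its
    timing in a single static picture.
    """
    img = np.zeros(shape)
    intensity = 1.0
    for r, c in reversed(list(trajectory)):   # newest point drawn brightest
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            img[r, c] = max(img[r, c], intensity)
        intensity *= decay
    return img
```
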
Facial image retrieval based on demographic classification
Bo Wu, H. Ai, Chang Huang
DOI: https://doi.org/10.1109/ICPR.2004.1334677
Abstract: In this paper, we propose a novel method for demographic classification and present an image retrieval system that can retrieve facial images by demographic information, including gender, age, and ethnicity. The demographic information is extracted from human faces by demographic classifiers learned by boosting Haar-feature-based look-up-table (LUT) weak classifiers. The image retrieval system consists of three modules: face detection, facial feature landmark extraction, and demographic classification. Experimental results are reported that show its potential for managing a large facial image database in online retrieval applications.
Citations: 23
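
To illustrate the look-up-table weak learner mentioned above, the sketch below bins the responses of a single Haar feature and stores a real-valued confidence per bin from the weighted positive and negative mass, Real-AdaBoost style. The bin count, feature range, and confidence formula are generic assumptions rather than the paper's exact settings.

```python
import numpy as np

def train_lut_weak(feature_values, labels, sample_weights, n_bins=32, lo=0.0, hi=1.0):
    """Train one look-up-table (LUT) weak classifier on a single Haar feature.

    The feature range [lo, hi] is split into n_bins; each bin stores a confidence
    0.5 * ln(W+ / W-) computed from the boosting weights of the positive and
    negative samples falling into it.  labels are +1 / -1.
    """
    eps = 1e-9
    bins = np.clip(((feature_values - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    w_pos = np.bincount(bins[labels > 0], weights=sample_weights[labels > 0], minlength=n_bins)
    w_neg = np.bincount(bins[labels < 0], weights=sample_weights[labels < 0], minlength=n_bins)
    return 0.5 * np.log((w_pos + eps) / (w_neg + eps))

def predict_lut_weak(lut, feature_values, lo=0.0, hi=1.0):
    """Look up the stored confidence for new feature responses."""
    n_bins = len(lut)
    bins = np.clip(((feature_values - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    return lut[bins]
```
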
Visually steerable sound beam forming system based on face tracking and speaker array
H. Mizoguchi, Y. Tamai, K. Shinoda, S. Kagami, K. Nagashima
DOI: https://doi.org/10.1109/ICPR.2004.1334692
Abstract: This paper presents a novel human-machine interface, named the invisible messenger, which integrates real-time visual face tracking with sound beam forming by a speaker array. The direction towards a target person is obtained from the face tracker in real time. By continuously updating the sound beam direction with the face tracking output, the system keeps transmitting sound selectively towards the target person even as he or she moves around. It thus realizes a remote whispering effect, as if an invisible messenger were standing beside the listener. The construction of a working system and actual measurements made with it demonstrate the feasibility and effectiveness of the proposed idea.
Citations: 2
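
A minimal sketch of how per-speaker delays could steer a delay-and-sum beam towards the tracked face direction. The array geometry, speed of sound, and far-field assumption are illustrative, not the authors' hardware configuration.

```python
import numpy as np

def steering_delays(speaker_positions, target_direction, c=343.0):
    """Per-speaker delays that steer a delay-and-sum beam towards the tracked face.

    speaker_positions: (n, 3) speaker coordinates in metres.
    target_direction: (3,) vector from the array towards the face (far field assumed).
    A speaker that sits further along the beam direction has a shorter path to the
    listener, so it is delayed more; delays are shifted to be non-negative.
    """
    d = np.asarray(target_direction, dtype=float)
    d = d / np.linalg.norm(d)
    delays = (speaker_positions @ d) / c      # seconds
    return delays - delays.min()

# Example: a 4-element line array steered 30 degrees off broadside.
array = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
print(steering_delays(array, [np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))]))
```
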
Combining sensory and symbolic data for manipulative gesture recognition
J. Fritsch, Nils Hofemann, G. Sagerer
DOI: https://doi.org/10.1109/ICPR.2004.1334681
Abstract: In this paper, we propose to recognize manipulative hand gestures by incorporating symbolic constraints into a particle-filtering approach used for trajectory-based activity recognition. To this end, the notion of the situational and spatial context of a gesture is introduced, and this scene context is incorporated during the analysis of the trajectory data. A first evaluation in an office environment demonstrates the suitability of our approach. Unlike purely trajectory-based approaches, our method recognizes manipulative gestures together with the information about which objects were manipulated.
Citations: 18
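
One simple way to picture the use of spatial context is to modulate trajectory-based particle weights by each particle's proximity to the objects the hypothesised gesture is expected to manipulate. The Gaussian proximity score below is an illustrative assumption, not the authors' model.

```python
import numpy as np

def reweight_with_context(weights, particle_positions, object_positions, sigma=30.0):
    """Fold spatial scene context into trajectory-based particle weights.

    weights: (n,) particle weights from the trajectory model.
    particle_positions: (n, 2) predicted hand positions of the particles (pixels).
    object_positions: (m, 2) positions of the objects the hypothesised gesture is
    expected to manipulate.  Each particle is boosted by a Gaussian score of its
    distance to the nearest such object, then the weights are renormalised.
    """
    d = np.linalg.norm(particle_positions[:, None, :] - object_positions[None, :, :], axis=-1)
    context = np.exp(-(d.min(axis=1) ** 2) / (2.0 * sigma ** 2))
    new_w = weights * context
    return new_w / new_w.sum()
```
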
Deformable geometry model matching by topological and geometric signatures
G. Tam, Rynson W. H. Lau, C. Ngo
DOI: https://doi.org/10.1109/ICPR.2004.1334676
Abstract: In this paper, we present a novel method for efficient 3D model comparison that matches highly deformed models by comparing topological and geometric features. First, we propose "bi-directional LSD analysis" to locate reliable topological points and rings. Second, based on these points and rings, sets of bounded regions are extracted as topological features. Third, for each bounded region, we capture additional spatial location, curvature, and area distribution as geometric data. Fourth, to model the topological importance of each bounded region, we use its effective area as a weight. Using the earth mover's distance as the distance measure between two models, our method achieves high accuracy in our retrieval experiment, with a precision of 0.53 even at a recall of 1.0.
Citations: 4
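
A sketch of the earth mover's distance between two weighted signatures of region descriptors, solved as a transportation linear program. The Euclidean ground distance and the use of scipy.optimize.linprog are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def earth_mover_distance(x, wx, y, wy):
    """Earth mover's distance between two weighted signatures.

    x: (n, d) region descriptors with weights wx (n,); y: (m, d) descriptors with
    weights wy (m,).  Formulated as a transportation problem: ship the smaller
    total weight at minimum cost, the ground distance being the Euclidean
    distance between descriptors.
    """
    n, m = len(x), len(y)
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).ravel()
    A_ub = np.zeros((n + m, n * m))          # row sums <= wx, column sums <= wy
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_ub[n + j, j::m] = 1.0
    b_ub = np.concatenate([wx, wy])
    A_eq = np.ones((1, n * m))               # total flow equals the smaller total weight
    b_eq = [min(wx.sum(), wy.sum())]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    flow = res.x
    return float(cost @ flow / flow.sum())
```
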
Robust real-time detection, tracking, and pose estimation of faces in video streams
Kohsia S. Huang, M. Trivedi
DOI: https://doi.org/10.1109/ICPR.2004.1334689
Abstract: Robust human face analysis has been recognized as a crucial component of intelligent systems. In this paper, we present the development of a computational framework for the robust detection, tracking, and pose estimation of faces captured by video arrays. We discuss the development of a multi-primitive skin-tone and edge-based detection module embedded in a tracking module for efficient and robust face detection and tracking. A continuous-density HMM based pose estimation module is developed for accurate estimation of face orientation motions. Experimental evaluations of these algorithms suggest the validity of the proposed framework and its computational modules.
Citations: 83
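
To give a concrete sense of continuous-density HMM pose estimation, the sketch below runs forward filtering with diagonal-Gaussian emissions over a set of discrete pose states. The feature dimensionality and the diagonal-covariance assumption are illustrative, not the paper's exact model.

```python
import numpy as np

def gaussian_hmm_filter(observations, trans, means, variances, prior):
    """Forward filtering with a continuous-density HMM over discrete pose states.

    observations: (T, d) per-frame face features; trans: (S, S) state transition
    matrix; means, variances: (S, d) diagonal-Gaussian emission parameters per
    pose state; prior: (S,) initial state distribution.  Returns the (T, S)
    filtered posterior over pose states, one row per frame.
    """
    T, _ = observations.shape
    S = len(prior)
    post = np.zeros((T, S))
    for t in range(T):
        diff = observations[t] - means                                    # (S, d)
        loglik = -0.5 * np.sum(diff**2 / variances + np.log(2 * np.pi * variances), axis=1)
        lik = np.exp(loglik - loglik.max())                               # rescaled for stability
        pred = prior if t == 0 else post[t - 1] @ trans
        post[t] = pred * lik
        post[t] /= post[t].sum()
    return post
```
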