Latest Papers from the 7th International Conference on Automatic Face and Gesture Recognition (FGR06)

Adding holistic dimensions to a facial composite system
C. Frowd, V. Bruce, A. McIntyre, P. Hancock
DOI: 10.1109/FGR.2006.20
Abstract: Facial composites are typically constructed by witnesses to crime by describing a suspect's face and then selecting facial features from a kit of parts. Unfortunately, when produced in this way, composites are very poorly identified. In contrast, there is mounting evidence that other, more recognition-based approaches can produce a much better likeness of a suspect. With the EvoFIT system, for example, witnesses are presented with sets of complete faces and a composite is 'evolved' through a process of selection and breeding. The current work serves to augment EvoFIT by developing a set of psychologically useful 'knobs' that allow faces to be manipulated along dimensions such as facial weight, masculinity, and age. These holistic dimensions were implemented by increasing the size and variability of the underlying face model and obtaining perceptual ratings so that the space could be suitably vectorised. Two evaluations suggested that the new dimensions were operating appropriately.
Citations: 20
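The abstract above describes vectorising a face model along perceptually rated "holistic" dimensions. A minimal sketch of one way such a dimension could be derived, assuming a PCA-style face space and synthetic ratings (all names and data here are illustrative, not EvoFIT's implementation):

```python
import numpy as np

# Illustrative sketch (not EvoFIT's actual code): given faces encoded as
# coefficient vectors in a statistical face model, regress perceptual
# ratings (e.g. perceived age) onto those coefficients to obtain a
# "holistic dimension", then shift a face along it. Data is synthetic.
rng = np.random.default_rng(0)
n_faces, n_dims = 200, 20
coeffs = rng.normal(size=(n_faces, n_dims))       # faces in model space
true_dir = rng.normal(size=n_dims)
true_dir /= np.linalg.norm(true_dir)
ratings = coeffs @ true_dir + 0.1 * rng.normal(size=n_faces)

# Least-squares regression recovers the rated direction in face space.
direction, *_ = np.linalg.lstsq(coeffs, ratings, rcond=None)
direction /= np.linalg.norm(direction)

def adjust(face, amount):
    """Move a face along the holistic dimension by `amount` units."""
    return face + amount * direction

face = coeffs[0]
older = adjust(face, 2.0)
```

A slider ("knob") in the interface would then simply vary `amount` and re-render the face from the adjusted coefficients.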
A new look at filtering techniques for illumination invariance in automatic face recognition
Ognjen Arandjelovic, R. Cipolla
DOI: 10.1109/FGR.2006.14
Abstract: Illumination invariance remains the most researched, yet the most challenging aspect of automatic face recognition. In this paper we propose a novel, general recognition framework for efficient matching of individual face images, sets or sequences. The framework is based on simple image processing filters that compete with unprocessed greyscale input to yield a single matching score between individuals. It is shown how the discrepancy between illumination conditions between novel input and the training data set can be estimated and used to weigh the contribution of two competing representations. We describe an extensive empirical evaluation of the proposed method on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our algorithm consistently demonstrated a dramatic performance improvement over traditional filtering approaches. We demonstrate a reduction of 50-75% in recognition error rates, the best performing method-filter combination correctly recognizing 96% of the individuals.
Citations: 28
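The core fusion idea, two representations competing for a single matching score, can be sketched as follows. This is a toy illustration, not the paper's implementation: the filter is a crude 3x3 high-pass, the score is normalised correlation, and the weight `alpha` is a free parameter (in the paper it is derived from the estimated illumination discrepancy between input and training data).

```python
import numpy as np

def normalised_correlation(a, b):
    # Zero-mean, unit-variance correlation between two image arrays.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def box_blur3(img):
    # 3x3 box blur with edge padding (no external dependencies).
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def high_pass(img):
    # Crude illumination-insensitive representation: subtract local mean.
    return img - box_blur3(img)

def fused_score(a, b, alpha):
    # Blend the raw greyscale score with the filtered-representation score.
    raw = normalised_correlation(a, b)
    filtered = normalised_correlation(high_pass(a), high_pass(b))
    return (1.0 - alpha) * raw + alpha * filtered
```

With `alpha` near 0 the raw greyscale dominates (appropriate when illumination conditions match); near 1 the filtered representation dominates.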
A realtime shrug detector
Huazhong Ning, T. Han, Yuxiao Hu, ZhenQiu Zhang, Yun Fu, Thomas S. Huang
DOI: 10.1109/FGR.2006.15
Abstract: A realtime system for shrug detection is discussed in this paper. The system is automatically initialized by a face detector based on AdaBoost [P. Viola and M. Jones, May 2004]. After the frontal face is localized by the face detector, the shoulder position is detected by fitting a parabola to the nearby horizontal edges using a weighted Hough transform [K. Sugawara, 1997]. Since a shrug is an action defined not only by the distance between face and shoulder but also by the relative temporal-spatial change between them, we propose a parameterizing scheme using two different parabolas, named the "stable parabola" (SP) and the "transient parabola" (TP), to characterize the shrug action. The stable parabola represents the mean shoulder position over a long time duration, while the transient parabola represents the mean shoulder position over a very short time duration. By using this scheme (only 6 dimensions), we avoid a high-dimensional representation of the temporal shrug process and therefore make a realtime implementation possible. The shrug detector is then trained in the parameter space using Fisher discriminant analysis (FDA). The experiments show that the proposed shrug detector is able not only to detect the shrug action correctly and efficiently (in realtime), but also to tolerate the large in-class variation caused by different subjects, action speeds, illumination, partial occlusion, and background clutter. The proposed realtime shrug detector is therefore promising for video analysis in uncontrolled environments.
Citations: 10
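The 6-dimensional SP/TP parameterisation can be sketched directly: each parabola is three coefficients, and the stable/transient pair is just two running means over different window lengths. This is an illustrative reconstruction under stated assumptions (least-squares fit standing in for the paper's weighted Hough transform; window lengths are arbitrary).

```python
import numpy as np

def fit_parabola(xs, ys, weights=None):
    # Fit y = a*x^2 + b*x + c to candidate shoulder edge points.
    # np.polyfit returns coefficients highest-degree first: (a, b, c).
    return np.polyfit(xs, ys, deg=2, w=weights)

def shrug_feature(coeff_history, long_win=30, short_win=3):
    # SP: mean parabola over a long window (slow shoulder position).
    # TP: mean parabola over a short window (fast shoulder position).
    # Concatenated, they form the paper's 6-D shrug descriptor.
    hist = np.asarray(coeff_history)
    stable = hist[-long_win:].mean(axis=0)
    transient = hist[-short_win:].mean(axis=0)
    return np.concatenate([stable, transient])
```

A per-frame classifier (FDA in the paper) then operates on this 6-D vector rather than on the raw temporal sequence.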
Learning sparse features in granular space for multi-view face detection
Chang Huang, H. Ai, Yuan Li, S. Lao
DOI: 10.1109/FGR.2006.70
Abstract: In this paper, a novel sparse feature set is introduced into the AdaBoost learning framework for multi-view face detection (MVFD), and a learning algorithm based on heuristic search is developed to select sparse features in granular space. Compared with Haar-like features, sparse features are more generic and more powerful for characterizing multi-view face patterns, which are more diverse and asymmetric than frontal face patterns. In order to cut the search space down to a manageable size, we propose a multi-scaled search algorithm that is about 6 times faster than brute-force search. With this method, an MVFD system is implemented that covers face pose changes over +/-45deg rotation in plane (RIP) and +/-90deg rotation off plane (ROP). Experiments on well-known test sets are reported to show its high performance in both accuracy and speed.
Citations: 59
Cascaded classification of gender and facial expression using active appearance models
Yunus Saatci, C. Town
DOI: 10.1109/FGR.2006.29
Abstract: This paper presents an approach to recognising the gender and expression of face images by means of active appearance models (AAM). Features extracted by a trained AAM are used to construct support vector machine (SVM) classifiers for 4 elementary emotional states (happy, angry, sad, neutral). These classifiers are arranged into a cascade structure in order to optimise overall recognition performance. Furthermore, it is shown how performance can be further improved by first classifying the gender of the face images using an SVM trained in a similar manner. Both gender-specific expression classification and expression-specific gender classification cascades are considered, with the former yielding better recognition performance. We conclude that there are gender-specific differences in the appearance of facial expressions that can be exploited for automated recognition, and that cascades are an efficient and effective way of performing multi-class recognition of facial expressions.
Citations: 159
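The gender-specific cascade structure amounts to routing: the gender prediction selects which expression classifier sees the AAM feature vector. A structural sketch with stub classifiers standing in for the trained SVMs (all names, features and decision rules here are toy placeholders):

```python
def classify(features, gender_clf, expression_clfs):
    """Route the feature vector to a gender-specific expression classifier."""
    gender = gender_clf(features)                # e.g. 'male' or 'female'
    return gender, expression_clfs[gender](features)

# Toy stand-ins for trained SVMs on AAM features (purely illustrative):
gender_clf = lambda f: 'male' if f[0] > 0 else 'female'
expression_clfs = {
    'male':   lambda f: 'happy' if f[1] > 0 else 'neutral',
    'female': lambda f: 'sad' if f[1] > 0 else 'angry',
}
```

In the paper's best configuration, each per-gender expression stage is itself a cascade of binary SVMs over the four emotional states.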
Video-based face recognition evaluation in the CHIL project - Run 1
H. K. Ekenel, Aristodemos Pnevmatikakis
DOI: 10.1109/FGR.2006.110
Abstract: This paper describes the video-based face recognition evaluation performed under the CHIL project and the systems that participated in it, along with the first-year results obtained. The evaluation methodology comprises a specially built database of videos and an evaluation protocol. Two complete automatic face detection and recognition systems from two academic institutions participated in the evaluation. For comparison purposes, a baseline system was also developed using well-known methods for face detection and recognition.
Citations: 34
Robust distance measures for face-recognition supporting revocable biometric tokens
T. Boult
DOI: 10.1109/FGR.2006.94
Abstract: This paper explores a form of robust distance measures for biometrics and presents experiments showing that, when applied per "class", they can dramatically improve the accuracy of face recognition. We "robustify" many distance measures included in the CSU face-recognition toolkit and apply them to PCA, LDA and EBGM. The resulting performance puts each of these algorithms, for the FERET datasets tested, on par with commercial face recognition results. Unlike passwords, biometric signatures cannot be changed or revoked. This paper shows how the robust distance measures introduced can be used for secure, robust, revocable biometrics. The technique produces what we call Biotopes, which provide public-key cryptographic security, support matching in encoded form, cannot be linked across different databases, and are revocable. Biotopes support a robust distance measure computed on the encoded form whose accuracy is proven not to decrease and may potentially increase. The approach is demonstrated to improve performance beyond the already impressive gains from the robust distance measure.
Citations: 111
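The "robustifying" of a distance measure can be illustrated with a saturating per-feature contribution, so that a few outlier features cannot dominate the match score. This is a generic sketch of the idea, not the paper's per-class tuned measures; the clip threshold here is arbitrary.

```python
import numpy as np

def robust_l1(a, b, clip=1.0):
    # Like an L1 distance, but each feature's contribution saturates at
    # `clip`, limiting the influence of any single outlier feature.
    return float(np.minimum(np.abs(a - b), clip).sum())
```

Compare with plain L1: a single corrupted feature with error 10.0 adds 10.0 to the plain distance but at most `clip` to the robust one.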
Tracking using dynamic programming for appearance-based sign language recognition
P. Dreuw, Thomas Deselaers, David Rybach, Daniel Keysers, H. Ney
DOI: 10.1109/FGR.2006.107
Abstract: We present a novel tracking algorithm that uses dynamic programming to determine the path of target objects and that is able to track an arbitrary number of different objects. The traceback method used to track the targets avoids taking possibly wrong local decisions and thus reconstructs the best tracking paths using the whole observation sequence. The tracking method can be compared to the nonlinear time alignment in automatic speech recognition (ASR) and it can analogously be integrated into a hidden Markov model based recognition process. We show how the method can be applied to the tracking of hands and the face for automatic sign language recognition.
Citations: 68
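The DP-with-traceback scheme is Viterbi-style: accumulate the best score of reaching each candidate position at each frame, then trace back from the best final position to recover the globally optimal path. A minimal 1-D sketch (the paper tracks 2-D positions; score maps and the jump penalty are simplified here):

```python
import numpy as np

def track(scores, jump_penalty=1.0):
    """Globally optimal 1-D track through per-frame position scores.

    scores: array of shape (n_frames, n_positions), higher = better match.
    A linear jump penalty discourages large frame-to-frame motion.
    """
    n_frames, n_pos = scores.shape
    cost = scores[0].copy()                     # best score ending at each pos
    back = np.zeros((n_frames, n_pos), dtype=int)
    positions = np.arange(n_pos)
    for t in range(1, n_frames):
        # trans[i, j]: score of being at i last frame and moving to j.
        trans = cost[:, None] - jump_penalty * np.abs(
            positions[:, None] - positions[None, :])
        back[t] = trans.argmax(axis=0)          # best predecessor of each j
        cost = scores[t] + trans.max(axis=0)
    # Traceback from the best final position: no premature local decisions.
    path = [int(cost.argmax())]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Because the path is committed only after the whole sequence is observed, a briefly ambiguous frame cannot derail the track, which is the traceback property the abstract emphasises.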
Making recognisable faces
D. Chatting
DOI: 10.1109/FGR.2006.76
Abstract: When delivering visual content on multiple devices and services, faces can often become unrecognisable. This paper draws together research from across the cognitive psychology literature to argue that faces should be treated as a special case when rendering content. Where available, we suggest methods by which recognition can be improved within the constraints of the device and service. Firstly, we review the psychology literature to discuss recognition performance when manipulating the face's scale, colour palette, orientation and motion. Secondly, we consider how characteristics of individual faces can aid or hinder recognition and how caricature may be applied, especially within crowds, to improve it. Thirdly, we show how context can make even the most abstract faces recognisable. Fourthly, we highlight the challenges of making a good portrait, beyond the criterion of simply being recognisable. Finally, we begin to describe a framework for automatically rendering faces 'smartly', such that they will be most recognisable given the device and service of which they are a part.
Citations: 1
A Vision Based Interface for Local Collaborative Music Synthesis
João Carreira, P. Peixoto
DOI: 10.1109/FGR.2006.16
Abstract: The computer is a ubiquitous element of modern society; nonetheless, human-computer interaction is still rather inflexible. In local collaborative environments such as office meetings, the mouse and keyboard serve as a gateway for a single individual to act upon a workspace, which makes local computer-mediated collaboration uncomfortable, as users have to time-share their actions upon that workspace. We present in this paper a novel, very fast, vision-based interface that allows multiple users to interact simultaneously with a single computer by performing hand gestures, which are filmed by a static video camera. The interface attempts to continuously recognize predefined postures and movements using a view-dependent method. We also present A.C.O, an application which receives input from the vision-based interface and allows users around a table to collaborate in playing synthesized music instruments by moving their hands.
Citations: 5