Video-based online face recognition using identity surfaces

Yongmin Li, S. Gong, H. Liddell
{"title":"Video-based online face recognition using identity surfaces","authors":"Yongmin Li, S. Gong, H. Liddell","doi":"10.1109/RATFG.2001.938908","DOIUrl":null,"url":null,"abstract":"A multi-view dynamic face model is designed to extract the shape-and-pose-free texture patterns effaces. The model provides a precise correspondence to the task of recognition since the 3D shape information is used to warp the multi-view faces onto the model mean shape in frontal-view. The identity surface of each subject is constructed in a discriminant feature space from a sparse set of face texture patterns, or more practically, from one or more learning sequences containing the face of the subject. Instead of matching templates or estimating multi-modal density functions, face recognition can be performed by computing the pattern distances to the identity surfaces or trajectory distances between the object and model trajectories. Experimental results depict that this approach provides an accurate recognition rate while using trajectory distances achieves a more robust performance since the trajectories encode the spatio-temporal information and contain accumulated evidence about the moving faces in a video input.","PeriodicalId":355094,"journal":{"name":"Proceedings IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"33","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RATFG.2001.938908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 33

Abstract

A multi-view dynamic face model is designed to extract shape-and-pose-free texture patterns of faces. The model provides a precise correspondence for the recognition task, since the 3D shape information is used to warp multi-view faces onto the model's mean shape in frontal view. The identity surface of each subject is constructed in a discriminant feature space from a sparse set of face texture patterns or, more practically, from one or more learning sequences containing the subject's face. Instead of matching templates or estimating multi-modal density functions, face recognition is performed by computing either the pattern distances to the identity surfaces or the trajectory distances between the object and model trajectories. Experimental results show that this approach achieves accurate recognition, while using trajectory distances yields more robust performance, since the trajectories encode spatio-temporal information and accumulate evidence about the moving face across the video input.
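
The following is a minimal sketch, not the authors' implementation, illustrating the two matching strategies the abstract describes: a per-frame pattern distance to an identity surface and an accumulated trajectory distance over a video sequence. It assumes each identity surface is approximated by a discrete set of sample points in the discriminant feature space; the names `IdentitySurface`, `pattern_distance`, `trajectory_distance`, and `recognize` are hypothetical.

```python
import numpy as np

class IdentitySurface:
    """Identity surface approximated by sampled feature vectors (hypothetical sketch)."""
    def __init__(self, samples: np.ndarray):
        # samples: (n_samples, feature_dim) discriminant feature vectors of one subject
        self.samples = samples

    def pattern_distance(self, x: np.ndarray) -> float:
        """Distance from a single face pattern x to this identity surface,
        taken here as the distance to the nearest sampled point."""
        return float(np.min(np.linalg.norm(self.samples - x, axis=1)))

def trajectory_distance(sequence: np.ndarray, surface: IdentitySurface) -> float:
    """Accumulate per-frame pattern distances over a sequence, so evidence
    about the moving face builds up across the video input."""
    return float(sum(surface.pattern_distance(x) for x in sequence))

def recognize(sequence: np.ndarray, surfaces: dict) -> str:
    """Assign the sequence to the subject whose identity surface
    yields the smallest trajectory distance."""
    return min(surfaces, key=lambda name: trajectory_distance(sequence, surfaces[name]))

# Usage with random vectors standing in for shape-and-pose-free texture features
rng = np.random.default_rng(0)
surfaces = {name: IdentitySurface(rng.normal(size=(20, 16))) for name in ("A", "B")}
query = rng.normal(size=(30, 16))  # a 30-frame object trajectory in feature space
print(recognize(query, surfaces))
```

In this sketch, the per-frame distance alone corresponds to matching a single pattern, while summing distances along the trajectory reflects the abstract's point that spatio-temporal accumulation makes the decision more robust to individual poor frames.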