Face 3D biometrics goes mobile: Searching for applications of portable depth sensor in face recognition

Weronika Gutfeter, A. Pacut
{"title":"人脸三维生物识别走向移动:探索便携式深度传感器在人脸识别中的应用","authors":"Weronika Gutfeter, A. Pacut","doi":"10.1109/CYBConf.2015.7175983","DOIUrl":null,"url":null,"abstract":"This paper presents an acquisition procedure and method of processing spatial images for face recognition with the use of a novel type of scanning device, namely mobile depth sensor Structure. Depth sensors, often called RGBD cameras, are able to deliver 3D images with a frame rate 30-60 frames per second, however they have relatively low resolution and a high level of noise. This kind of data is compared here with a high quality scans enrolled by the structural light scanner, for which the acquisition time is approximately 1.5 s for a single image, and which - because of its size - cannot be classified as a portable device. The purpose of this work was to find the method that will allow us to extract spatial features from mobile data sources analyzed here only in a static context. We transform the 3D data into local surface features and then into vectors of unified length by use of the Moving Least Squares method applied to a predefined grid of points on a reference cylinder. The feature matrices were calculated for various image features, and used in PCA analysis. Finally, the verification errors were calculated and compared to those obtained for stationary devices. The results show that single-image mobile sensor images lead to the results inferior to those of stationary sensors. However, we suggest a dynamic depth stream processing as the next step in the evolution of the described method. The presented results show that by including multi-frame processing into our method, it is likely to gain the accuracy similar to those obtained for a stationary device under controlled laboratory conditions.","PeriodicalId":177233,"journal":{"name":"2015 IEEE 2nd International Conference on Cybernetics (CYBCONF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Face 3D biometrics goes mobile: Searching for applications of portable depth sensor in face recognition\",\"authors\":\"Weronika Gutfeter, A. Pacut\",\"doi\":\"10.1109/CYBConf.2015.7175983\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an acquisition procedure and method of processing spatial images for face recognition with the use of a novel type of scanning device, namely mobile depth sensor Structure. Depth sensors, often called RGBD cameras, are able to deliver 3D images with a frame rate 30-60 frames per second, however they have relatively low resolution and a high level of noise. This kind of data is compared here with a high quality scans enrolled by the structural light scanner, for which the acquisition time is approximately 1.5 s for a single image, and which - because of its size - cannot be classified as a portable device. The purpose of this work was to find the method that will allow us to extract spatial features from mobile data sources analyzed here only in a static context. We transform the 3D data into local surface features and then into vectors of unified length by use of the Moving Least Squares method applied to a predefined grid of points on a reference cylinder. The feature matrices were calculated for various image features, and used in PCA analysis. Finally, the verification errors were calculated and compared to those obtained for stationary devices. 
The results show that single-image mobile sensor images lead to the results inferior to those of stationary sensors. However, we suggest a dynamic depth stream processing as the next step in the evolution of the described method. The presented results show that by including multi-frame processing into our method, it is likely to gain the accuracy similar to those obtained for a stationary device under controlled laboratory conditions.\",\"PeriodicalId\":177233,\"journal\":{\"name\":\"2015 IEEE 2nd International Conference on Cybernetics (CYBCONF)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-06-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE 2nd International Conference on Cybernetics (CYBCONF)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CYBConf.2015.7175983\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 2nd International Conference on Cybernetics (CYBCONF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CYBConf.2015.7175983","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

This paper presents an acquisition procedure and a method of processing spatial images for face recognition using a novel type of scanning device, namely the mobile depth sensor Structure. Depth sensors, often called RGBD cameras, can deliver 3D images at a frame rate of 30-60 frames per second, but they have relatively low resolution and a high level of noise. This kind of data is compared here with high-quality scans acquired by a structured-light scanner, whose acquisition time is approximately 1.5 s per image and which, because of its size, cannot be classified as a portable device. The purpose of this work was to find a method that allows us to extract spatial features from mobile data sources, analyzed here only in a static context. We transform the 3D data into local surface features and then into vectors of uniform length by applying the Moving Least Squares method to a predefined grid of points on a reference cylinder. Feature matrices were calculated for various image features and used in PCA analysis. Finally, verification errors were calculated and compared to those obtained for stationary devices. The results show that single images from the mobile sensor lead to results inferior to those of stationary sensors. However, we suggest dynamic depth stream processing as the next step in the evolution of the described method. The presented results indicate that, by including multi-frame processing in our method, it is likely possible to achieve accuracy similar to that obtained for a stationary device under controlled laboratory conditions.
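
To make the pipeline in the abstract concrete, the following Python sketch resamples a face point cloud onto a fixed grid defined on a reference cylinder using a Gaussian-weighted (moving) least-squares fit, producing vectors of uniform length, and then compares PCA projections for verification. This is not the authors' code; the grid resolution, kernel width, angular scaling factor and decision threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the pipeline outlined in the abstract: cylindrical MLS
# resampling of a 3D face scan into a fixed-length vector, then PCA-based
# verification.  All numeric parameters are illustrative assumptions.

import numpy as np


def cylindrical_coords(points):
    """Map (x, y, z) points to (theta, height, radius), assuming the
    reference cylinder axis is the vertical y-axis through the origin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(x, z)            # angle around the vertical axis
    radius = np.sqrt(x ** 2 + z ** 2)   # distance from the axis
    return theta, y, radius


def mls_resample(points, n_theta=40, n_height=60, kernel_width=0.02):
    """Resample a point cloud onto an n_theta x n_height cylindrical grid.
    Each grid node gets a radius value from a Gaussian-weighted linear fit
    over nearby samples -- a simple instance of Moving Least Squares."""
    theta, height, radius = cylindrical_coords(points)
    t_grid = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    h_grid = np.linspace(height.min(), height.max(), n_height)

    features = np.zeros((n_theta, n_height))
    for i, t0 in enumerate(t_grid):
        for j, h0 in enumerate(h_grid):
            # Gaussian weights centred on the grid node; 0.1 roughly scales
            # the angular coordinate to the same units as height (assumed).
            d2 = (0.1 * (theta - t0)) ** 2 + (height - h0) ** 2
            w = np.exp(-d2 / (2.0 * kernel_width ** 2))
            mask = w > 1e-3
            if mask.sum() < 3:
                continue                # hole in the scan: leave the node at 0
            # Weighted linear fit radius ~ a + b*dtheta + c*dheight,
            # evaluated at the node itself (i.e. the intercept a).
            sw = np.sqrt(w[mask])
            A = np.stack([np.ones(mask.sum()),
                          theta[mask] - t0,
                          height[mask] - h0], axis=1)
            coeff, *_ = np.linalg.lstsq(A * sw[:, None], radius[mask] * sw,
                                        rcond=None)
            features[i, j] = coeff[0]
    return features.ravel()             # fixed-length feature vector


def pca_verify(gallery, probes, n_components=20, threshold=0.5):
    """Project gallery and probe vectors onto the first PCA components and
    accept a probe if its nearest gallery vector is within the threshold."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    basis = vt[:n_components]                       # principal directions
    g = (gallery - mean) @ basis.T
    p = (probes - mean) @ basis.T
    dists = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)
    return dists.min(axis=1) < threshold            # accept/reject decisions
```

The paper computes feature matrices for several local surface features and reports verification errors for tuned thresholds; the sketch above keeps only the radius channel and a single fixed threshold to show the skeleton of that process.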