{"title":"Fusion of IR and visible light modalities for face recognition","authors":"P. Buyssens, M. Revenu, O. Lepetit","doi":"10.1109/BTAS.2009.5339031","DOIUrl":null,"url":null,"abstract":"We present a low resolution face recognition technique based on a special type of convolutional neural network which is trained to extract facial features from face images and project them onto a low-dimensional space. The network is trained to reconstruct a reference image chosen beforehand, and it has been applied in visible and infrared light. Since the learning phase is achieved separately for the two modalities, the projections, and then the new spaces, are uncorrelated for the two networks. However, by normalizing the results of these two non-linear approaches, we can merge them according to a measure of saliency computed dynamically. We experimentally show that our approach obtain good results in terms of precision and robustness, especially on new and unseen subjects.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BTAS.2009.5339031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11
Abstract
We present a low-resolution face recognition technique based on a special type of convolutional neural network trained to extract facial features from face images and project them onto a low-dimensional space. The network is trained to reconstruct a reference image chosen beforehand, and it is applied to both visible and infrared light. Since the learning phase is carried out separately for each modality, the projections, and hence the resulting spaces, are uncorrelated across the two networks. However, by normalizing the outputs of these two non-linear approaches, we can merge them according to a measure of saliency computed dynamically. We show experimentally that our approach obtains good results in terms of precision and robustness, especially on new, unseen subjects.
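The fusion step can be illustrated with a minimal sketch. This is not the authors' implementation: the min-max normalization and the scalar saliency weights below are assumptions standing in for the dynamically computed saliency measure mentioned in the abstract.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to [0, 1].

    A common normalization choice; the abstract does not specify
    which normalization the authors use.
    """
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(visible_scores, ir_scores, saliency_visible, saliency_ir):
    """Saliency-weighted sum of normalized per-modality scores.

    `saliency_visible` and `saliency_ir` are hypothetical scalars
    standing in for the dynamically computed saliency measure.
    """
    v = min_max_normalize(visible_scores)
    i = min_max_normalize(ir_scores)
    w_v = saliency_visible / (saliency_visible + saliency_ir)
    w_i = 1.0 - w_v
    return w_v * v + w_i * i

# Toy usage: similarity scores of one probe against a 5-subject gallery.
visible = np.array([0.2, 0.9, 0.4, 0.1, 0.3])
ir = np.array([0.5, 0.7, 0.6, 0.2, 0.1])
fused = fuse_scores(visible, ir, saliency_visible=0.8, saliency_ir=0.4)
print("Predicted identity:", int(np.argmax(fused)))
```

Because each network is trained separately, its scores live on its own scale; normalizing before the weighted combination keeps one modality from dominating the fused decision.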