Quality dependent multimodal fusion of face and iris biometrics
Nefissa Khiari Hili, Christophe Montagne, S. Lelandais, K. Hamrouni
2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), December 2016
DOI: 10.1109/IPTA.2016.7820954
Citations: 6
Abstract
Although the iris is known as the most accurate biometric modality and the face as the most widely accepted, both encounter data variability in real-world applications. This limitation can be overcome by a multimodal system based on both traits. Additionally, by conditioning the multimodal fusion on quality, useful information can be extracted from lower-quality measures rather than rejecting them out of hand. This paper proposes a dynamic weighted-sum fusion that exploits an iris occlusion-based quality metric when combining unimodal scores. Instead of incorporating the quality of the gallery and probe images separately, a single quality metric is computed for each gallery-probe comparison. Two strategies for integrating this metric into score-level fusion are explored. Experiments on the IV2 multimodal database, which includes multiple sources of variability, show that the proposed method improves on the best current non-quality-based fusion schemes by more than 30% in terms of Equal Error Rate.
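To make the idea of quality-conditioned score fusion concrete, the following is a minimal sketch, not the paper's actual formulation. It assumes that the face and iris matchers return normalized similarity scores in [0, 1], that an iris quality value in [0, 1] (e.g. one minus an occlusion ratio) is available per gallery-probe comparison, and that the iris weight simply scales with that quality; the function name, the base weight, and the weighting rule are all illustrative assumptions.

```python
def fuse_scores(face_score: float, iris_score: float, iris_quality: float,
                base_iris_weight: float = 0.6) -> float:
    """Dynamic weighted sum (illustrative sketch, not the paper's exact rule):
    the iris weight shrinks as its per-comparison quality drops, and the freed
    weight is transferred to the face score so the weights still sum to 1."""
    w_iris = base_iris_weight * iris_quality   # down-weight an occluded iris
    w_face = 1.0 - w_iris                      # remaining weight goes to the face
    return w_face * face_score + w_iris * iris_score


# Example: with a heavily occluded iris (quality 0.2) the iris score
# contributes little, so the fused decision leans on the face score.
fused = fuse_scores(face_score=0.82, iris_score=0.35, iris_quality=0.2)
print(f"fused score: {fused:.3f}")
```

The design choice this sketch illustrates is the one emphasized in the abstract: a single quality value per gallery-probe pair drives the weighting, rather than separate quality estimates for the gallery and probe images.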