{"title":"Hidden Markov Models for feature-level fusion of biometrics on mobile devices","authors":"M. Gofman, S. Mitra, Nicholas Smith","doi":"10.1109/AICCSA.2016.7945755","DOIUrl":null,"url":null,"abstract":"Although biometrics have forayed into the mobile world, most current approaches rely on a single biometric modality. This limits their recognition accuracy in uncontrolled conditions. For example, performance of face and voice recognition systems may suffer in poorly lit and noisy settings, respectively. Integration of identifying information from multiple biometric modalities can help solve this problem; high-quality identifying information in one modality can compensate for the absence of such information in a modality affected by uncontrolled conditions. In this paper, we present a novel multimodal biometric scheme that uses Hidden Markov Models to consolidate data from face and voice biometrics at the feature level. An implementation on the Samsung Galaxy S5 (SG5) phone using a dataset of face and voice samples captured using SG5 in real-world operating conditions, yielded 4.18% and 9.71% higher recognition accuracy than face and voice single-modality systems, respectively.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICCSA.2016.7945755","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Although biometrics have made their way into the mobile world, most current approaches rely on a single biometric modality. This limits their recognition accuracy in uncontrolled conditions: for example, face and voice recognition systems may suffer in poorly lit and noisy settings, respectively. Integrating identifying information from multiple biometric modalities can help solve this problem, because high-quality identifying information in one modality can compensate for the absence of such information in a modality affected by uncontrolled conditions. In this paper, we present a novel multimodal biometric scheme that uses Hidden Markov Models to consolidate data from face and voice biometrics at the feature level. An implementation on the Samsung Galaxy S5 (SG5) phone, evaluated on a dataset of face and voice samples captured with the SG5 under real-world operating conditions, yielded 4.18% and 9.71% higher recognition accuracy than face-only and voice-only systems, respectively.
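
The abstract describes consolidating face and voice features at the feature level with Hidden Markov Models. As a rough illustration of that idea only, and not the authors' implementation, the hypothetical sketch below concatenates per-frame face and voice feature vectors, trains one Gaussian HMM per enrolled user, and identifies a probe by the highest model log-likelihood. The feature extractors, dimensions, function names, and the use of the hmmlearn library are all assumptions made for illustration.

```python
# Hypothetical sketch of HMM-based feature-level fusion of face and voice
# biometrics. Feature extraction, dimensions, and library choice (hmmlearn)
# are assumptions; this is not the paper's actual implementation.
import numpy as np
from hmmlearn import hmm


def fuse_features(face_feats: np.ndarray, voice_feats: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-frame face and voice vectors.

    face_feats:  (T, d_face) array, e.g. per-frame facial descriptors
    voice_feats: (T, d_voice) array, e.g. per-frame MFCCs
    """
    T = min(len(face_feats), len(voice_feats))  # align sequence lengths
    return np.hstack([face_feats[:T], voice_feats[:T]])


def train_user_model(fused_sequences, n_states: int = 5) -> hmm.GaussianHMM:
    """Fit one Gaussian HMM per enrolled user on that user's fused sequences."""
    X = np.vstack(fused_sequences)
    lengths = [len(s) for s in fused_sequences]
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model


def identify(probe_fused: np.ndarray, user_models: dict) -> str:
    """Return the enrolled identity whose HMM gives the highest log-likelihood."""
    return max(user_models, key=lambda uid: user_models[uid].score(probe_fused))
```

In this kind of scheme, fusion happens before modeling: each observation fed to the HMM already carries information from both modalities, so a degraded modality (e.g. a noisy voice sample) can be offset by the other within the same model rather than at a later score-combination stage.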