Fusion of face and visual speech information for identity verification
Longbin Lu, Xinman Zhang, Xuebin Xu
2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), November 2017
DOI: 10.1109/ISPACS.2017.8266530
Fusion of multiple biometric characteristics for identity verification offers clear advantages over conventional systems based on unimodal biometric features. In this research, a new multimodal verification method is investigated that integrates face and visual speech information simultaneously. Unlike face-only verification, the proposed scheme exploits lip-movement features in visual speech, which significantly reduces the risk of being deceived by a fake face image. To accomplish this, a Locality Preserving Projection (LPP) transform and a Projection Local Spatiotemporal Descriptor (PLSD) are applied for feature extraction from the face and visual speech, respectively. To combine the multisource biometric features, an Extreme Learning Machine (ELM) based fusion strategy is applied at the matching-score level to generate a fused score for the final verification decision. Experiments conducted on the OuluVS database show that the proposed method achieves very satisfactory results.
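The score-level fusion step can be sketched as follows. This is a minimal illustrative ELM, not the paper's implementation: the toy matching scores, labels, network size, and decision threshold are all assumptions made up for the example. An ELM fixes random input weights and solves the output weights in closed form with a pseudoinverse, then maps a (face score, lip score) pair to a single fused score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data (illustrative, not from the paper):
# each row is [face_match_score, lip_match_score]; label 1 = genuine, 0 = impostor.
X = np.array([[0.90, 0.80],
              [0.85, 0.90],
              [0.20, 0.30],
              [0.10, 0.25]])
y = np.array([1.0, 1.0, 0.0, 0.0])

L = 20                                    # hidden-layer size (assumed)
W = rng.standard_normal((X.shape[1], L))  # random input weights, never trained
b = rng.standard_normal(L)                # random biases, never trained

def hidden(X):
    # Sigmoid activation of the random hidden layer.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# ELM training: output weights solved in closed form via the
# Moore-Penrose pseudoinverse of the hidden-layer output matrix.
beta = np.linalg.pinv(hidden(X)) @ y

def fused_score(face_score, lip_score):
    # Map a pair of unimodal matching scores to one fused score.
    return float(hidden(np.array([[face_score, lip_score]])) @ beta)

# Final verification: accept the claimed identity if the fused
# score exceeds a threshold (0.5 here, chosen arbitrarily).
print(fused_score(0.88, 0.85))  # genuine-like score pair
print(fused_score(0.15, 0.20))  # impostor-like score pair
```

Because the output weights are the only learned parameters and are obtained by a single linear solve, training is very fast, which is the usual motivation for choosing an ELM as the fusion classifier.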