Multi-Level Liveness Verification for Face-Voice Biometric Authentication

G. Chetty, M. Wagner
{"title":"Multi-Level Liveness Verification for Face-Voice Biometric Authentication","authors":"G. Chetty, M. Wagner","doi":"10.1109/BCC.2006.4341615","DOIUrl":null,"url":null,"abstract":"In this paper we present the details of the multilevel liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types of audio and video replay attacks. The proposed MLLV framework based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationship between voice and face information from speaking faces, and allows multiple levels of security. Experiments with three different speaking corpora VidTIMIT, UCBN and AVOZES shows a significant improvement in system performance in terms of DET curves and equal error rates (EER) for different types of replay and synthesis attacks.","PeriodicalId":226152,"journal":{"name":"2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference","volume":"114 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"80","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BCC.2006.4341615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 80

Abstract

In this paper we present the details of the multilevel liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types of audio and video replay attacks. The proposed MLLV framework, based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationships between voice and face information in speaking faces, and allows multiple levels of security. Experiments with three different speaking-face corpora, VidTIMIT, UCBN and AVOZES, show a significant improvement in system performance in terms of DET curves and equal error rates (EER) for different types of replay and synthesis attacks.
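For readers unfamiliar with the evaluation terms in the abstract, the sketch below illustrates how score-level fusion of face and voice match scores and an equal error rate (EER) estimate are commonly computed. It is not the authors' MLLV method: the weighted-sum fusion rule, the fusion weight, and the synthetic scores are placeholder assumptions used purely for illustration.

```python
# Illustrative sketch only: the fusion rule, weights and synthetic scores below
# are assumptions for demonstration, not the MLLV framework from the paper.
import numpy as np

def fuse_scores(face_scores, voice_scores, w_face=0.5):
    """Weighted-sum score-level fusion of face and voice match scores
    (scores assumed to be normalised to a comparable range)."""
    return w_face * np.asarray(face_scores) + (1.0 - w_face) * np.asarray(voice_scores)

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a decision threshold over all observed scores
    and finding the point where false-accept and false-reject rates are closest."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    best_far, best_frr = 1.0, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # false accept rate: impostors passing
        frr = np.mean(genuine < t)     # false reject rate: genuine users failing
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0

# Toy usage with synthetic score distributions (not data from the paper):
rng = np.random.default_rng(0)
genuine = fuse_scores(rng.normal(0.70, 0.10, 500), rng.normal(0.75, 0.10, 500))
impostor = fuse_scores(rng.normal(0.40, 0.10, 500), rng.normal(0.35, 0.10, 500))
print(f"EER estimate: {equal_error_rate(genuine, impostor):.3f}")
```

A DET curve is obtained the same way, by plotting the false-accept against the false-reject rate over the full sweep of thresholds rather than reporting only the crossing point.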