Latest publications: 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)

Keystroke dynamics recognition based on personal data: A comparative experimental evaluation implementing reproducible research
A. Morales, Mario Falanga, Julian Fierrez, Carlo Sansone, J. Ortega-Garcia
DOI: 10.1109/BTAS.2015.7358772
Abstract: This work proposes a new benchmark for keystroke dynamics recognition on the basis of fully reproducible research. Instead of traditional authentication approaches based on complex passwords, we propose a novel keystroke recognition approach based on typing patterns from personal data. We present a new database made up of the keystroke patterns of 63 users and 7560 samples. The proposed approach eliminates the need to memorize complex passwords (something that we know) by replacing them with personal data (something that we are). The results encourage further exploration of this new application scenario, and the availability of data and source code represents a valuable new resource for the research community.
Citations: 27
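The typing patterns mentioned in the abstract are conventionally summarized by key hold times and flight times. Below is a minimal illustrative sketch (not the paper's released code) of those timing features with a simple z-score comparison against an enrolled template; the event format, feature choice, and threshold are assumptions for demonstration only.

```python
# Minimal sketch of classic keystroke-dynamics timing features; all values are toy data.
import numpy as np

def timing_features(events):
    """events: list of (key, press_time, release_time) tuples, in typing order."""
    hold = np.array([r - p for _, p, r in events])                  # key hold times
    flight = np.array([events[i + 1][1] - events[i][1]              # press-to-press latencies
                       for i in range(len(events) - 1)])
    return np.concatenate([hold, flight])

def verify(sample, template_mean, template_std, threshold=1.5):
    """Accept if the mean absolute z-score against the enrolled template is small."""
    z = np.abs((sample - template_mean) / (template_std + 1e-9))
    return z.mean() < threshold

# Toy usage: enroll from repetitions of the same fixed-length phrase.
enroll = [timing_features([("a", 0.00, 0.08), ("b", 0.15, 0.22), ("c", 0.31, 0.40)]),
          timing_features([("a", 0.00, 0.09), ("b", 0.16, 0.24), ("c", 0.33, 0.41)])]
mean, std = np.mean(enroll, axis=0), np.std(enroll, axis=0)
probe = timing_features([("a", 0.00, 0.08), ("b", 0.15, 0.23), ("c", 0.32, 0.40)])
print(verify(probe, mean, std))   # True for this genuine-like probe
```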
Bin-based weak classifier fusion of iris and face biometrics
Di Miao, Man Zhang, Haiqing Li, Zhenan Sun, T. Tan
DOI: 10.1109/BTAS.2015.7358749
Abstract: Both the high accuracy of iris biometrics and the user-friendly interface of face recognition are important to a biometric recognition system, so an open problem is how to combine iris and face biometrics for reliable personal identification. This paper proposes a bin-based weak classifier fusion method for multibiometric recognition using iris and face. The matching scores of iris and face image patches are partitioned into multiple bins, and weak classifiers are learned on the bins. Such a non-linear score mapping is simple and efficient, yet it can uncover detailed and distinctive information hidden in the matching scores, significantly improving classification performance. In addition, an ensemble learning method based on boosting is used to select the most discriminant and robust bin-based weak classifiers for identity verification. The excellent performance on the CASIA-Iris-Distance database demonstrates the advantages of the proposed method over other multibiometric fusion methods.
Citations: 6
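To make the bin idea concrete, here is a minimal sketch (not the paper's boosting procedure): matching scores are quantile-binned, each bin is mapped to a smoothed log-likelihood-ratio weight learned from genuine/impostor training scores, and the mapped scores are summed across modalities. The bin count, smoothing, and toy score distributions are assumptions.

```python
# Bin-based non-linear score mapping on synthetic scores.
import numpy as np

def fit_bin_mapper(genuine, impostor, n_bins=10):
    edges = np.quantile(np.concatenate([genuine, impostor]), np.linspace(0, 1, n_bins + 1))
    g_hist, _ = np.histogram(genuine, bins=edges)
    i_hist, _ = np.histogram(impostor, bins=edges)
    llr = np.log((g_hist + 1.0) / (i_hist + 1.0))   # Laplace-smoothed log-likelihood ratio
    return edges, llr

def map_score(score, edges, llr):
    b = np.clip(np.searchsorted(edges, score) - 1, 0, len(llr) - 1)
    return llr[b]

rng = np.random.default_rng(0)
# Toy genuine/impostor scores for two modalities (e.g., iris and face patches).
mappers = [fit_bin_mapper(rng.normal(loc, 0.1, 500), rng.normal(0.4, 0.1, 500))
           for loc in (0.7, 0.6)]

probe = [0.68, 0.55]   # one matching score per modality
fused = sum(map_score(s, *m) for s, m in zip(probe, mappers))
print("accept" if fused > 0 else "reject")
```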
Iris imaging in visible spectrum using white LED
K. Raja, Ramachandra Raghavendra, C. Busch
DOI: 10.1109/BTAS.2015.7358769
Abstract: Iris recognition in the visible spectrum has many challenging aspects. In particular, for subjects with dark iris color, caused by higher melanin pigmentation and collagen fibrils, the iris pattern is not clearly observable under visible light, so verification performance is generally lowered by the limited texture visibility in the captured iris samples. In this work, we propose a novel method of employing a white light-emitting diode (LED) to obtain high-quality iris images with detailed texture. To evaluate the proposed set-up with LED light, we acquired a new database of dark iris images comprising 62 unique iris instances with ten samples each, captured in different sessions. The database was acquired using three different smartphones: iPhone 5S, Nokia Lumia 1020 and Samsung Active S4. We also benchmark the proposed method against conventional Near-Infra-Red (NIR) images, which are available for a subset of the database. Extensive experiments were carried out using five well-established iris recognition algorithms and one commercial off-the-shelf algorithm. They demonstrate the reliable performance of the proposed image capturing setup, with a GMR of 91.01% at FMR = 0.01%, indicating its applicability in real-life authentication scenarios.
Citations: 9
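The reported operating point, Genuine Match Rate (GMR) at a fixed False Match Rate (FMR), is computed as sketched below on synthetic scores: pick the threshold that admits the target fraction of impostor comparisons, then measure the fraction of genuine comparisons above it.

```python
# GMR at a fixed FMR (here 0.01%) on synthetic similarity scores; higher = better match.
import numpy as np

def gmr_at_fmr(genuine, impostor, target_fmr=1e-4):
    # Threshold = the impostor-score quantile that lets through target_fmr of impostors.
    threshold = np.quantile(impostor, 1.0 - target_fmr)
    return np.mean(genuine >= threshold)

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.08, 5000)
impostor = rng.normal(0.45, 0.10, 50000)
print(f"GMR @ FMR=0.01%: {100 * gmr_at_fmr(genuine, impostor):.2f}%")
```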
Finger vein liveness detection using motion magnification
Ramachandra Raghavendra, M. Avinash, S. Marcel, C. Busch
DOI: 10.1109/BTAS.2015.7358762
Abstract: Finger vein recognition has emerged as an accurate and reliable biometric modality that has been deployed in various security applications. However, finger vein recognition has also been shown to be vulnerable to presentation attacks (or direct attacks). In this work, we present a novel algorithm to determine the liveness of the finger vein characteristic presented to the sensor. The core idea of the proposed approach is to magnify the blood flow through the finger vein to measure its liveness. To this end, we employ the Eulerian Video Magnification (EVM) approach to enhance the motion of the blood in the recorded finger vein video. We then further process the magnified video to extract motion-based features using optical flow to identify finger vein artefacts. Extensive experiments are carried out on a relatively large database comprising normal presentation vein videos from 300 unique finger instances corresponding to 100 subjects. The finger vein artefact database was captured by printing the 300 real (or normal) presentation images of the finger vein samples on high-quality paper using two kinds of printers, laser and inkjet. Extensive comparative evaluation against four well-established state-of-the-art schemes demonstrates the efficacy of the proposed scheme.
Citations: 44
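A minimal sketch of the pipeline idea (not the authors' implementation): amplify subtle temporal variation with an FFT-based temporal bandpass, a simplified stand-in for full Eulerian Video Magnification, then summarize motion with dense optical flow. A live finger should show periodic blood-flow motion that printed artefacts lack. The passband, gain, and synthetic video are assumptions.

```python
# Simplified motion magnification + optical-flow liveness feature on a toy video.
import numpy as np
import cv2

def magnify(video, fps, lo=0.8, hi=2.0, alpha=20.0):
    """video: (T, H, W) float32 grayscale. Band-pass each pixel's time series and amplify."""
    freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)
    spectrum = np.fft.rfft(video, axis=0)
    mask = ((freqs >= lo) & (freqs <= hi))[:, None, None]
    band = np.fft.irfft(spectrum * mask, n=video.shape[0], axis=0)
    return video + alpha * band

def mean_flow_magnitude(video):
    frames = [np.clip(f, 0, 255).astype(np.uint8) for f in video]
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    return float(np.mean(mags))

# Toy video: 64 frames of 32x32 noise with a faint 1.2 Hz flicker (the "pulse").
rng = np.random.default_rng(2)
t = np.arange(64) / 30.0
video = (128 + 5 * rng.standard_normal((64, 32, 32))
         + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None])
score = mean_flow_magnitude(magnify(video.astype(np.float32), fps=30))
print(f"motion-based liveness feature: {score:.3f}")   # decision threshold would be learned
```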
A deep neural network for audio-visual person recognition
Mohammad Rafiqul Alam, M. Bennamoun, R. Togneri, Ferdous Sohel
DOI: 10.1109/BTAS.2015.7358754
Abstract: This paper presents applications of special types of deep neural networks (DNNs) for audio-visual biometrics. A common example is the DBN-DNN, which uses the generative weights of deep belief networks (DBNs) to initialize the feature-detecting layers of deterministic feed-forward DNNs. In this paper, we propose the DBM-DNN, which uses the generative weights of deep Boltzmann machines (DBMs) to initialize DNNs. A softmax layer is then added on top and the DNNs are trained discriminatively. Our experimental results show that lower error rates can be achieved using the DBM-DNN compared to the support vector machine (SVM), the linear regression-based classifier (LRC) and the DBN-DNN. Experiments were carried out on two publicly available audio-visual datasets: VidTIMIT and MOBIO.
Citations: 9
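A minimal PyTorch sketch of the initialization idea only: a feed-forward network whose hidden layers are seeded with externally pretrained generative weights, topped with a softmax classifier and trained discriminatively. The DBM pretraining itself is out of scope here, so random tensors stand in for its weights, and the layer sizes, class count, and toy batch are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder generative weights standing in for DBM pretraining output;
# nn.Linear stores weights as (out_features, in_features).
pretrained = [torch.randn(512, 256), torch.randn(256, 512)]

model = nn.Sequential(
    nn.Linear(256, 512), nn.Sigmoid(),
    nn.Linear(512, 256), nn.Sigmoid(),
    nn.Linear(256, 10),              # class logits; softmax is applied inside the loss
)
with torch.no_grad():
    model[0].weight.copy_(pretrained[0])   # seed hidden layers from generative weights
    model[2].weight.copy_(pretrained[1])

criterion = nn.CrossEntropyLoss()          # log-softmax + NLL, i.e., the softmax layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 256)                   # toy fused audio-visual features
y = torch.randint(0, 10, (32,))            # toy identity labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"one discriminative fine-tuning step, loss={loss.item():.3f}")
```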
Presentation attack detection using Laplacian decomposed frequency response for visible spectrum and Near-Infra-Red iris systems
K. Raja, Ramachandra Raghavendra, C. Busch
DOI: 10.1109/BTAS.2015.7358790
Abstract: Biometric systems are being challenged at the sensor level with artefact presentations such as printed artefacts or electronic screen attacks. In this work, we propose a novel technique to detect artefact iris images by decomposing the images into Laplacian pyramids of various scales and obtaining frequency responses in different orientations. The obtained features are classified using a support vector machine with a polynomial kernel. Further, we extend the technique with a majority voting rule to decide on artefact detection for video-based iris recognition in the visible spectrum. The proposed technique is evaluated on a newly created visible spectrum iris video database as well as on Near-Infra-Red (NIR) images. The new database is specifically tailored to study the vulnerability of visible-spectrum iris recognition to presentation attacks using videos on a smartphone; it is referred to as the 'Presentation Attack Video Iris Database' (PAVID) and consists of 152 unique iris patterns obtained from two different smartphones: iPhone 5S and Nokia Lumia 1020. The proposed technique achieves an Attack Classification Error Rate (ACER) of 0.64% on the PAVID database and 1.37% on the LivDet iris dataset, validating the robustness and applicability of the proposed presentation attack detection (PAD) algorithm in real-life scenarios.
Citations: 21
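A minimal sketch of the feature idea (not the paper's exact descriptor): decompose an image into a Laplacian pyramid, summarize each level's energy, and classify bona fide vs. artefact with a polynomial-kernel SVM, fusing per-frame decisions by majority vote for video. The per-level mean-absolute-energy summary and the toy data are assumptions.

```python
# Laplacian-pyramid energy features + polynomial-kernel SVM on synthetic textures.
import numpy as np
import cv2
from sklearn.svm import SVC

def laplacian_energy_features(img, levels=4):
    feats, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        lap = cur - up                      # band-pass residual at this scale
        feats.append(np.abs(lap).mean())    # per-level energy summary
        cur = down
    return np.array(feats)

# Toy data: "bona fide" images carry more fine-scale texture than blurred "artefacts".
rng = np.random.default_rng(3)
bona = [rng.normal(128, 40, (64, 64)) for _ in range(20)]
arte = [cv2.GaussianBlur(rng.normal(128, 40, (64, 64)).astype(np.float32), (7, 7), 2)
        for _ in range(20)]
X = np.stack([laplacian_energy_features(i) for i in bona + arte])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="poly", degree=3).fit(X, y)

# For video, per-frame decisions are fused by majority vote:
frame_preds = clf.predict(X[:5])
print("artefact" if frame_preds.sum() > len(frame_preds) / 2 else "bona fide")
```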
Pokerface: Partial order keeping and energy repressing method for extreme face illumination normalization
Felix Juefei-Xu, M. Savvides
DOI: 10.1109/BTAS.2015.7358787
Abstract: We propose a new method called Pokerface for extreme face illumination normalization. Pokerface is a two-phase approach. It first aims at maximizing the minimum gap between adjacently-valued pixels while keeping the partial ordering of the pixels in a face image captured under extreme illumination, an intuitive effort based on order theory to unveil the underlying structure of a dark image. This optimization can be formulated as a feasibility search problem and solved efficiently by linear programming. It then smooths the intermediate representation by repressing the energy of the gradient map; the smoothing step is carried out by total variation minimization and sparse approximation. Faces normalized by the proposed Pokerface not only exhibit very high fidelity to the neutrally illuminated face, but also yield a significant improvement in face verification experiments using even the simplest classifier. Simultaneously achieving a high level of faithfulness and expressiveness is rare among existing methods. These conclusions are drawn after benchmarking our algorithm against 22 prevailing illumination normalization techniques on both the CMU Multi-PIE database and the Extended YaleB database, which are widely adopted for face illumination problems.
Citations: 17
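The first phase can be phrased as a small linear program: keep the pixel ranking while maximizing the minimum gap between adjacently-ranked values. Below is a 1-D toy sketch of that LP with scipy; the second phase (total-variation smoothing) and the full image-scale formulation are omitted.

```python
# Order-keeping LP on a toy "dark" signal: maximize the minimum adjacent gap t.
import numpy as np
from scipy.optimize import linprog

x = np.array([0.02, 0.05, 0.03, 0.10, 0.04])   # toy dark-image pixel values
order = np.argsort(x)                           # ranking to preserve
n = len(x)

# Variables: v_0..v_{n-1} (new values) and t (minimum adjacent gap).
# Maximize t  <=>  minimize -t, subject to v[order[i+1]] - v[order[i]] >= t.
c = np.zeros(n + 1)
c[-1] = -1.0
A_ub = np.zeros((n - 1, n + 1))
for i in range(n - 1):
    A_ub[i, order[i]] = 1.0        # v_lower - v_higher + t <= 0
    A_ub[i, order[i + 1]] = -1.0
    A_ub[i, -1] = 1.0
b_ub = np.zeros(n - 1)
bounds = [(0.0, 1.0)] * n + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("stretched values:", np.round(res.x[:n], 3))   # evenly spread, ranking intact
```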
On the vulnerability of speaker verification to realistic voice spoofing
Serife Seda Kucur Ergunay, E. Khoury, Alexandros Lazaridis, S. Marcel
DOI: 10.1109/BTAS.2015.7358783
Abstract: Automatic speaker verification (ASV) systems are subject to various kinds of malicious attacks. Replay, voice conversion and speech synthesis attacks drastically degrade the performance of a standard ASV system by increasing its false acceptance rate. This issue has raised considerable interest in the speech research community, where possible voice spoofing attacks and their countermeasures have been investigated. However, much less effort has been devoted to creating realistic and diverse spoofing attack databases that let researchers correctly evaluate their countermeasures. Existing studies are incomplete in terms of attack types and often difficult to reproduce because public databases are unavailable. In this paper we introduce the voice spoofing dataset of AVspoof, a public audio-visual spoofing database. AVspoof includes ten realistic spoofing threats generated using replay, speech synthesis and voice conversion. In addition, we provide experimental results that show the effect of such attacks on current state-of-the-art ASV systems.
Citations: 115
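A minimal sketch of how such vulnerability is typically measured: fix the decision threshold at the EER point of the standard verification protocol, then report how often spoofed trials are falsely accepted at that same threshold. The scores below are synthetic stand-ins for ASV output.

```python
# False acceptance rate under spoofing at a threshold set on zero-effort impostors.
import numpy as np

def eer_threshold(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    return thresholds[np.argmin(np.abs(far - frr))]

rng = np.random.default_rng(4)
genuine = rng.normal(2.0, 0.5, 1000)
zero_effort = rng.normal(0.0, 0.5, 1000)   # ordinary impostors
spoofed = rng.normal(1.7, 0.5, 1000)       # replay / synthesis / conversion attacks

t = eer_threshold(genuine, zero_effort)
print(f"FAR (zero-effort): {100 * (zero_effort >= t).mean():.1f}%")
print(f"FAR (spoofed):     {100 * (spoofed >= t).mean():.1f}%")   # drastically higher
```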
Towards fitting a 3D dense facial model to a 2D image: A landmark-free approach
Yuhang Wu, Xiang Xu, S. Shah, I. Kakadiaris
DOI: 10.1109/BTAS.2015.7358799
Abstract: Head pose estimation helps to align a 3D face model to a 2D image, which is critical to research requiring dense 2D-to-2D or 3D-to-2D correspondence. Traditional pose estimation relies strongly on the accuracy of landmarks and is therefore sensitive to missing or incorrect landmarks. In this paper, we propose a landmark-free approach to estimate the pose projection matrix. The method can be used to estimate this matrix in unconstrained scenarios, and we demonstrate its effectiveness through multiple head pose estimation experiments.
Citations: 5
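For context, here is a minimal sketch of the landmark-based baseline the paper moves away from: a least-squares estimate of an affine pose-projection matrix from 3D model landmarks and their 2D image positions. The paper's contribution is to estimate this matrix without such landmarks; that method is not shown here.

```python
# Least-squares affine projection from 3D-2D landmark correspondences (the baseline).
import numpy as np

def fit_affine_projection(X3d, x2d):
    """X3d: (N, 3) model points; x2d: (N, 2) image points. Returns the 2x4 matrix P
    minimizing ||P @ [X; 1] - x||^2 (needs N >= 4 non-coplanar points)."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])      # homogeneous coordinates (N, 4)
    P, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)       # solves Xh @ P.T = x2d
    return P.T                                         # (2, 4)

# Toy check: recover a known projection from noiseless correspondences.
rng = np.random.default_rng(5)
P_true = np.array([[0.9, 0.1, 0.05, 10.0],
                   [-0.1, 0.95, 0.02, 20.0]])
X = rng.uniform(-1, 1, (10, 3))
x = (P_true @ np.hstack([X, np.ones((10, 1))]).T).T
print(np.allclose(fit_affine_projection(X, x), P_true))   # True
```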
Human and algorithm performance on the PaSC face Recognition Challenge
P. Phillips, Matthew Q. Hill, Jake A. Swindle, A. O’Toole
DOI: 10.1109/BTAS.2015.7358765
Abstract: Face recognition by machines has improved substantially in the past decade and is now at a level that compares favorably with humans for frontal faces acquired by digital single-lens reflex cameras. We expand the comparison between humans and algorithms to still images and videos taken with digital point-and-shoot cameras. The data used for this comparison are from the Point and Shoot Face Recognition Challenge (PaSC). For videos, human performance was compared with the four top performers in the Face and Gesture 2015 Person Recognition Evaluation. In the literature, there are two methods for computing human performance: aggregation and fusion. We show that the fusion method produces higher performance estimates. We report performance for two levels of difficulty: challenging and extremely difficult. Our results provide additional evidence that human performance shines relative to algorithms on extremely difficult comparisons. To improve the community's understanding of the state of human and algorithm performance, we update the cross-modal performance analysis of Phillips and O'Toole [22] with these new results.
Citations: 21
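A toy sketch of one common reading of the two estimators: "aggregation" treats every individual rating as its own trial, while "fusion" averages the ratings for each face pair across observers before scoring; averaging cancels observer noise, which is consistent with fusion producing the higher estimate. The rating model below is an assumption, not the paper's data.

```python
# Aggregation vs. fusion of per-pair similarity ratings from multiple observers.
import numpy as np

def auc(genuine, impostor):
    """Area under the ROC: P(genuine score > impostor score), computed directly."""
    g, i = np.asarray(genuine), np.asarray(impostor)
    return (g[:, None] > i[None, :]).mean()

rng = np.random.default_rng(6)
n_pairs, n_observers = 100, 8
truth = np.repeat([1, 0], n_pairs // 2)                    # same / different person
# Each observer's rating = shared pair signal + private observer noise.
signal = truth * 1.0 + rng.normal(0, 0.5, n_pairs)
ratings = signal[:, None] + rng.normal(0, 1.0, (n_pairs, n_observers))

agg = auc(ratings[truth == 1].ravel(), ratings[truth == 0].ravel())   # aggregation
fused_scores = ratings.mean(axis=1)                                   # fusion
fus = auc(fused_scores[truth == 1], fused_scores[truth == 0])
print(f"aggregation AUC: {agg:.3f}   fusion AUC: {fus:.3f}")          # fusion higher
```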