Revisiting the Security of Biometric Authentication Systems Against Statistical Attacks

Impact Factor 3.0 · JCR Q2 (Computer Science, Information Systems) · CAS Category 4 (Computer Science)
Sohail Habib, Hassan Khan, Andrew Hamilton-Wright, Urs Hengartner
DOI: 10.1145/3571743 (https://dl.acm.org/doi/10.1145/3571743)
Journal: ACM Transactions on Privacy and Security
Publication date: 2022-11-19 · Journal Article
Citations: 0

Abstract

The uniqueness of behavioural biometrics (e.g., voice or keystroke patterns) has been challenged by recent work. Statistical attacks have been proposed that infer general population statistics and use them to attack the behavioural biometric of a particular victim. We show that, despite their success, these approaches require several attempts to succeed against different biometrics, because users' behaviour overlaps differently for each biometric. Furthermore, no mechanism has been proposed to date that detects statistical attacks. In this work, we propose a new hypervolume-based statistical attack and show that, unlike existing methods, it: 1) succeeds against a variety of biometrics; 2) succeeds against more users; and 3) requires the fewest attempts for a successful attack. More specifically, across five diverse biometrics, on the first attempt our attack is on average 18 percentage points more successful than the second best (37% vs. 19%). Similarly, on the fifth attempt our attack is on average 18 percentage points more successful than the second best (67% vs. 49%). We also propose and evaluate a mechanism that can detect these more devastating statistical attacks. False rejects are common in biometric systems, and by distinguishing statistical attacks from false rejects our defence improves both usability and security. The evaluation of the proposed detection mechanism shows that it detects on average 94% of the tested statistical attacks, while misclassifying false rejects as statistical attacks with an average probability of only 3%. Given the serious threat that statistical attacks pose to biometrics in use today (e.g., voice), our work highlights the need to defend against these attacks.
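To make the threat model concrete, the following is a minimal, hypothetical sketch of a population-statistics attack against a toy per-feature threshold verifier. Every name and the verifier design here are illustrative assumptions, not the paper's method: the hypervolume-based attack described in the abstract is considerably more sophisticated than this mean-and-offset guessing strategy, but the core idea is the same — the attacker forges attempts from population statistics alone, with no samples from the victim.

```python
import random
import statistics

def train_verifier(samples, k=2.0):
    """Toy per-feature threshold verifier (an illustrative stand-in for a
    real biometric classifier): accept a probe only if every feature lies
    within k standard deviations of the enrolled user's mean."""
    means = [statistics.mean(col) for col in zip(*samples)]
    stds = [statistics.stdev(col) for col in zip(*samples)]
    def verify(probe):
        return all(abs(p - m) <= k * s for p, m, s in zip(probe, means, stds))
    return verify

def statistical_attack(population, attempts=5):
    """Forge attempts from population statistics alone, without any victim
    data: attempt 1 is the population mean; later attempts probe offsets
    around it, exploiting the overlap in users' behaviour."""
    means = [statistics.mean(col) for col in zip(*population)]
    stds = [statistics.stdev(col) for col in zip(*population)]
    guesses = [means]
    for off in (0.5, -0.5, 1.0, -1.0)[:attempts - 1]:
        guesses.append([m + off * s for m, s in zip(means, stds)])
    return guesses

# Toy population: 50 users, each a 3-feature behavioural profile.
random.seed(0)
population = [[random.gauss(10, 2), random.gauss(5, 1), random.gauss(100, 15)]
              for _ in range(50)]

guesses = statistical_attack(population)
print(f"attack attempts generated: {len(guesses)}")
# → attack attempts generated: 5
```

A victim whose behaviour sits near the centre of the population distribution would be accepted by the toy verifier on the very first guess; the paper's contribution is both a far more effective way of ordering such guesses and a defence that distinguishes them from ordinary false rejects.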

Source journal: ACM Transactions on Privacy and Security (Computer Science – General Computer Science)
CiteScore: 5.20
Self-citation rate: 0.00%
Articles per year: 52
Journal description: ACM Transactions on Privacy and Security (TOPS) (formerly known as TISSEC) publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.