{"title":"EarSlide: a Secure Ear Wearables Biometric Authentication Based on Acoustic Fingerprint","authors":"Zi Wang, Yilin Wang, Jie Yang","doi":"10.1145/3643515","DOIUrl":null,"url":null,"abstract":"Ear wearables (earables) are emerging platforms that are broadly adopted in various applications. There is an increasing demand for robust earables authentication because of the growing amount of sensitive information and the IoT devices that the earable could access. Traditional authentication methods become less feasible due to the limited input interface of earables. Nevertheless, the rich head-related sensing capabilities of earables can be exploited to capture human biometrics. In this paper, we propose EarSlide, an earable biometric authentication system utilizing the advanced sensing capacities of earables and the distinctive features of acoustic fingerprints when users slide their fingers on the face. It utilizes the inward-facing microphone of the earables and the face-ear channel of the ear canal to reliably capture the acoustic fingerprint. In particular, we study the theory of friction sound and categorize the characteristics of the acoustic fingerprints into three representative classes, pattern-class, ridge-groove-class, and coupling-class. Different from traditional fingerprint authentication only utilizes 2D patterns, we incorporate the 3D information in acoustic fingerprint and indirectly sense the fingerprint for authentication. We then design representative sliding gestures that carry rich information about the acoustic fingerprint while being easy to perform. It then extracts multi-class acoustic fingerprint features to reflect the inherent acoustic fingerprint characteristic for authentication. We also adopt an adaptable authentication model and a user behavior mitigation strategy to effectively authenticate legit users from adversaries. The key advantages of EarSlide are that it is resistant to spoofing attacks and its wide acceptability. Our evaluation of EarSlide in diverse real-world environments with intervals over one year shows that EarSlide achieves an average balanced accuracy rate of 98.37% with only one sliding gesture.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"27 19","pages":"24:1-24:29"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3643515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Ear wearables (earables) are emerging platforms that are being broadly adopted in various applications. There is an increasing demand for robust earable authentication because of the growing amount of sensitive information and the number of IoT devices that earables can access. Traditional authentication methods become less feasible due to the limited input interfaces of earables. Nevertheless, the rich head-related sensing capabilities of earables can be exploited to capture human biometrics. In this paper, we propose EarSlide, an earable biometric authentication system that utilizes the advanced sensing capabilities of earables and the distinctive features of the acoustic fingerprints produced when users slide their fingers on the face. It uses the earable's inward-facing microphone and the face-ear channel of the ear canal to reliably capture the acoustic fingerprint. In particular, we study the theory of friction sound and categorize the characteristics of acoustic fingerprints into three representative classes: pattern-class, ridge-groove-class, and coupling-class. Unlike traditional fingerprint authentication, which utilizes only 2D patterns, we incorporate the 3D information of the acoustic fingerprint and sense the fingerprint indirectly for authentication. We then design representative sliding gestures that carry rich information about the acoustic fingerprint while being easy to perform. EarSlide then extracts multi-class acoustic fingerprint features that reflect the inherent acoustic fingerprint characteristics for authentication. We also adopt an adaptable authentication model and a user behavior mitigation strategy to effectively distinguish legitimate users from adversaries. The key advantages of EarSlide are its resistance to spoofing attacks and its wide acceptability. Our evaluation of EarSlide in diverse real-world environments, over intervals of more than one year, shows that it achieves an average balanced accuracy of 98.37% with only one sliding gesture.
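As a concrete illustration of the pipeline the abstract describes (capture the friction sound through the earable's inward-facing microphone, extract multi-class acoustic fingerprint features, and match against an enrolled template), here is a minimal sketch in Python. The band limits, the toy descriptors standing in for the pattern-, ridge-groove-, and coupling-class features, and the cosine-similarity decision rule are all illustrative assumptions; the paper's actual features and adaptable authentication model are not specified in the abstract.

```python
# Minimal sketch of an EarSlide-style authentication pipeline.
# Assumptions (not from the paper): sampling rate, band limits, the toy
# stand-ins for the three feature classes, and the cosine-similarity rule.
import numpy as np
from scipy.signal import butter, sosfiltfilt, spectrogram

FS = 16_000  # assumed in-ear microphone sampling rate


def bandpass(x, lo=100.0, hi=4000.0, fs=FS):
    """Isolate the friction-sound band conducted over the face-ear channel."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)


def acoustic_fingerprint_features(x, fs=FS):
    """Toy stand-ins for the three feature classes named in the abstract:
    pattern-class (temporal energy envelope), ridge-groove-class (dominant
    frequency track), and coupling-class (low/high band-energy ratio)."""
    x = bandpass(x, fs=fs)
    f, t, S = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
    envelope = S.sum(axis=0)            # pattern-class: energy over time
    dominant = f[S.argmax(axis=0)]      # ridge-groove-class: peak frequency per frame
    low = S[f < 1000.0].sum()
    high = S[f >= 1000.0].sum()
    coupling = low / (low + high + 1e-12)  # coupling-class: band-energy ratio
    feat = np.array([
        envelope.mean(), envelope.std(),
        dominant.mean(), dominant.std(),
        coupling,
    ])
    return feat / (np.linalg.norm(feat) + 1e-12)  # unit norm for cosine matching


def authenticate(sample, template, threshold=0.9):
    """Accept the sliding gesture if its features match the enrolled template."""
    score = float(np.dot(acoustic_fingerprint_features(sample), template))
    return score >= threshold, score
```

For enrollment, one would average the feature vectors of several enrollment slides and renormalize to form `template`; the acceptance threshold then trades off false accepts against false rejects, and could be adapted per user in the spirit of the adaptable authentication model the abstract mentions.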