EarPPG: Securing Your Identity with Your Ears

Seokmin Choi, Junghwan Yim, Yincheng Jin, Yang Gao, Jiyang Li, Zhanpeng Jin
DOI: 10.1145/3581641.3584070
Published in: Proceedings of the 28th International Conference on Intelligent User Interfaces
Publication date: 2023-03-27
Citations: 0

Abstract

Wearable devices have become indispensable in people’s daily lives; wireless earphones in particular have seen unprecedented growth in recent years, leading to increasing interest in and exploration of user authentication techniques. Conventional user authentication methods embedded in wireless earphones that rely on microphones or other modalities are vulnerable to environmental factors such as loud noise or occlusion. To address this limitation, we introduce EarPPG, a new biometric modality that takes advantage of in-ear photoplethysmography (PPG) signals as they are altered by a user’s unique speaking behaviors. When the user speaks, muscle movements change the geometry of blood vessels, inducing unique PPG signal variations. Because both speaking behaviors and PPG signals are unique to an individual, EarPPG combines the two biometric traits and presents a secure and unobtrusive authentication solution. The system first detects and segments EarPPG signals, then extracts effective features to construct a user authentication model with the 1D ReGRU network. We conducted comprehensive real-world evaluations with 25 human participants and achieved 94.84% accuracy and 0.95 precision, recall, and F1-score. Moreover, considering practical implications, we conducted several extensive in-the-wild experiments covering body motion, occlusion, lighting, and permanence. The outcomes of this study have the potential to be embedded in future smart earable devices.
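The front end of the pipeline described above (detect and segment the in-ear PPG stream before feature extraction and classification) can be sketched minimally as smoothing followed by fixed-length windowing. This is an illustrative sketch only: the moving-average filter, the 100 Hz sampling rate, and the 2 s window / 1 s hop are assumptions for demonstration, not parameters from the paper, and the paper's speech-triggered segmentation and 1D ReGRU model are not reproduced here.

```python
import numpy as np

def smooth(ppg, fs, win_s=0.1):
    """Crude low-pass smoothing via a moving average (stand-in for a real
    bandpass filter; win_s is the averaging window in seconds)."""
    k = max(1, int(fs * win_s))
    kernel = np.ones(k) / k
    return np.convolve(ppg, kernel, mode="same")

def segment(ppg, fs, win_s=2.0, hop_s=1.0):
    """Slice a 1-D PPG stream into fixed-length overlapping windows,
    one row per window, ready for per-window feature extraction."""
    win, hop = int(fs * win_s), int(fs * hop_s)
    starts = range(0, len(ppg) - win + 1, hop)
    return np.stack([ppg[i:i + win] for i in starts])

# Synthetic 10 s "PPG" trace: a ~1.2 Hz pulse plus sensor noise.
fs = 100  # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

windows = segment(smooth(ppg, fs), fs)
print(windows.shape)  # (9, 200): nine 2-second windows at a 1 s hop
```

Each row of `windows` would then feed the feature-extraction stage; in the paper this is followed by the 1D ReGRU authentication model, which is out of scope for this sketch.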