EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements

Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 62:1-62:24 (2022). DOI: 10.1145/3534621
Citations: 14

Abstract

This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., an earphone) to continuously track facial expressions using two microphone-speaker pairs (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on the earable towards the face. Depending on the facial expression, the muscles, tissues, and skin around the ear deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer full facial expressions, represented by 52 parameters captured using a TrueDepth camera. Compared to similar technologies, EarIO has significantly lower power consumption: it samples at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios showed that EarIO can reliably estimate detailed facial movements while participants are sitting or walking, and after remounting the device. Based on these encouraging results, we further discuss the potential opportunities and challenges of applying EarIO to future ear-mounted wearables.
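The abstract's sensing principle (emit a known signal, capture reflections, and derive an "echo profile" that shifts as skin and tissue deform) can be illustrated with a generic matched-filter sketch. EarIO's actual transmit waveform, sampling parameters, and processing pipeline are described in the paper; every numeric value and signal choice below is an illustrative assumption, not EarIO's design:

```python
import numpy as np

# Hedged sketch: derive an echo profile by cross-correlating a received
# signal with a known transmitted chirp (matched filtering). All
# parameters here are assumed for illustration, not taken from EarIO.

fs = 50_000                      # sample rate in Hz (assumed)
dur = 0.01                       # chirp duration in seconds (assumed)
t = np.arange(int(fs * dur)) / fs

# Linear chirp sweeping 16-20 kHz (an assumed near-inaudible band).
f0, f1 = 16_000, 20_000
tx = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t**2))

# Simulate a received frame: a strong direct path plus a weaker skin
# reflection delayed by 40 samples. A changing facial expression would
# shift this echo's delay and amplitude from frame to frame.
rx = np.zeros(len(tx) + 100)
rx[:len(tx)] += 1.0 * tx          # direct speaker-to-mic path
rx[40:40 + len(tx)] += 0.3 * tx   # reflection off deforming skin

# Echo profile: correlation magnitude at each candidate delay (lag).
profile = np.correlate(rx, tx, mode="valid")

# The strongest return is the direct path at lag 0; the skin echo
# appears as a secondary peak near lag 40.
print(profile.argmax())
```

A learning pipeline like the one the abstract describes would consume sequences of such per-frame profiles (or their frame-to-frame differences) rather than individual peaks, letting the model map deformation-induced profile changes to the 52 expression parameters.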