Facial expression recognition using ear canal transfer function

Takashi Amesaka, Hiroki Watanabe, M. Sugimoto
{"title":"利用耳道传递函数进行面部表情识别","authors":"Takashi Amesaka, Hiroki Watanabe, M. Sugimoto","doi":"10.1145/3341163.3347747","DOIUrl":null,"url":null,"abstract":"In this study, we propose a new input method for mobile and wearable computing using facial expressions. Facial muscle movements induce physical deformation in the ear canal. Our system utilizes such characteristics and estimates facial expressions using the ear canal transfer function (ECTF). Herein, a user puts on earphones with an equipped microphone that can record an internal sound of the ear canal. The system transmits ultrasonic band-limited swept sine signals and acquires the ECTF by analyzing the response. An important novelty feature of our method is that it is easy to incorporate into a product because the speaker and the microphone are equipped with many hearables, which is technically advanced electronic in-ear-device designed for multiple purposes. We investigated the performance of our proposed method for 21 facial expressions with 11 participants. Moreover, we proposed a signal correction method that reduces positional errors caused by attaching/detaching the device. The evaluation results confirmed that the f-score was 40.2% for the uncorrected signal method and 62.5% for the corrected signal method. We also investigated the practical performance of six facial expressions and confirmed that the f-score was 74.4% for the uncorrected signal method and 90.0% for the corrected signal method. We found the ECTF can be used for recognizing facial expressions with high accuracy equivalent to other related work.","PeriodicalId":112916,"journal":{"name":"Proceedings of the 2019 ACM International Symposium on Wearable Computers","volume":"119 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"42","resultStr":"{\"title\":\"Facial expression recognition using ear canal transfer function\",\"authors\":\"Takashi Amesaka, Hiroki Watanabe, M. Sugimoto\",\"doi\":\"10.1145/3341163.3347747\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this study, we propose a new input method for mobile and wearable computing using facial expressions. Facial muscle movements induce physical deformation in the ear canal. Our system utilizes such characteristics and estimates facial expressions using the ear canal transfer function (ECTF). Herein, a user puts on earphones with an equipped microphone that can record an internal sound of the ear canal. The system transmits ultrasonic band-limited swept sine signals and acquires the ECTF by analyzing the response. An important novelty feature of our method is that it is easy to incorporate into a product because the speaker and the microphone are equipped with many hearables, which is technically advanced electronic in-ear-device designed for multiple purposes. We investigated the performance of our proposed method for 21 facial expressions with 11 participants. Moreover, we proposed a signal correction method that reduces positional errors caused by attaching/detaching the device. The evaluation results confirmed that the f-score was 40.2% for the uncorrected signal method and 62.5% for the corrected signal method. We also investigated the practical performance of six facial expressions and confirmed that the f-score was 74.4% for the uncorrected signal method and 90.0% for the corrected signal method. 
We found the ECTF can be used for recognizing facial expressions with high accuracy equivalent to other related work.\",\"PeriodicalId\":112916,\"journal\":{\"name\":\"Proceedings of the 2019 ACM International Symposium on Wearable Computers\",\"volume\":\"119 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"42\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 ACM International Symposium on Wearable Computers\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3341163.3347747\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM International Symposium on Wearable Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3341163.3347747","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 42

Abstract

In this study, we propose a new input method for mobile and wearable computing that uses facial expressions. Facial muscle movements induce physical deformation of the ear canal. Our system exploits this characteristic and estimates facial expressions from the ear canal transfer function (ECTF). The user wears earphones equipped with a microphone that records sound inside the ear canal; the system transmits ultrasonic band-limited swept sine signals and acquires the ECTF by analyzing the response. An important advantage of our method is that it is easy to incorporate into a product, because many hearables (technically advanced electronic in-ear devices designed for multiple purposes) already include both a speaker and a microphone. We investigated the performance of the proposed method on 21 facial expressions with 11 participants. We also proposed a signal correction method that reduces positional errors caused by attaching and detaching the device. The evaluation confirmed an F-score of 40.2% for the uncorrected signals and 62.5% for the corrected signals. We further investigated practical performance on six facial expressions and obtained an F-score of 74.4% for the uncorrected signals and 90.0% for the corrected signals. We found that the ECTF can be used to recognize facial expressions with accuracy comparable to related work.
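
To make the acquisition step concrete, the sketch below shows one standard way to estimate a transfer function from a swept-sine measurement: play a band-limited sweep through the earphone speaker, record the in-ear microphone signal, and divide the cross-spectrum by the excitation's power spectrum (the H1 estimator). The sample rate, band limits, sweep length, and FFT segment size are illustrative assumptions; the paper's exact parameters and pipeline may differ.

```python
# Minimal sketch of ECTF estimation from a swept-sine measurement.
# Not the authors' exact pipeline; FS, F0, F1, and DUR are assumptions.
import numpy as np
from scipy.signal import chirp, csd, welch

FS = 48_000               # sample rate in Hz (assumed)
F0, F1 = 18_000, 22_000   # ultrasonic band limits in Hz (assumed)
DUR = 0.5                 # sweep duration in seconds (assumed)

def swept_sine(fs=FS, f0=F0, f1=F1, dur=DUR):
    """Band-limited linear swept sine used as the excitation signal."""
    t = np.arange(int(fs * dur)) / fs
    return chirp(t, f0=f0, t1=dur, f1=f1, method="linear")

def estimate_ectf(excitation, response, fs=FS, nperseg=2048):
    """Estimate H(f) = S_xy(f) / S_xx(f), the H1 transfer-function estimator.

    Uses the cross-spectral density between the played sweep and the in-ear
    microphone recording; bins outside the excited band are discarded,
    since the sweep carries no energy (and hence no information) there.
    """
    f, s_xy = csd(excitation, response, fs=fs, nperseg=nperseg)
    _, s_xx = welch(excitation, fs=fs, nperseg=nperseg)
    h = s_xy / (s_xx + 1e-12)       # epsilon avoids division by zero
    band = (f >= F0) & (f <= F1)    # keep only the excited band
    return f[band], h[band]

if __name__ == "__main__":
    x = swept_sine()
    fake_canal = np.array([0.6, 0.3, 0.1])       # toy impulse response
    y = np.convolve(x, fake_canal, mode="same")  # simulated mic recording
    f, h = estimate_ectf(x, y)
    print(len(f), "frequency bins;",
          "min |H| =", np.abs(h).min(), "max |H| =", np.abs(h).max())
```

In a real deployment the excitation and recording would come from the hearable's speaker and microphone rather than a simulated filter, and the estimator would be applied repeatedly to track expression-induced changes in H(f).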
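
The abstract does not spell out the signal correction, so the following is only a plausible sketch of that stage: normalize each measured ECTF by a neutral-expression reference captured after each re-wear, so that positional shifts common to both measurements cancel, then feed log-magnitude features to an off-the-shelf classifier. The normalization scheme, the SVM choice, and all sizes below are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of a re-wear correction plus classification stage.
# NOT the paper's exact method; the reference-normalization idea and
# all parameters are assumptions for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def correct_ectf(h, h_neutral):
    """Divide out a neutral-face ECTF captured after (re)attaching the
    earphone, so shifts common to both measurements cancel."""
    return h / (h_neutral + 1e-12)

def to_features(h):
    """Log-magnitude spectrum as the feature vector."""
    return 20.0 * np.log10(np.abs(h) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, bins = 120, 64                         # toy data sizes (assumed)
    h_neutral = rng.normal(1.0, 0.05, bins)   # per-session reference ECTF
    h_measured = h_neutral * rng.normal(1.0, 0.1, (n, bins))  # toy ECTFs
    X = to_features(correct_ectf(h_measured, h_neutral))
    y = rng.integers(0, 6, n)   # six expression classes, as in the paper

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print("toy training accuracy:", clf.score(X, y))
```

The reported jump from 74.4% to 90.0% F-score on six expressions suggests that whatever correction the authors used, removing the attachment-dependent component of the ECTF before classification is the step that matters.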