Gaze+Lip: Rapid, Precise and Expressive Interactions Combining Gaze Input and Silent Speech Commands for Hands-free Smart TV Control

Zixiong Su, Xinlei Zhang, N. Kimura, J. Rekimoto
DOI: 10.1145/3448018.3458011 (https://doi.org/10.1145/3448018.3458011)
Published: 2021-05-25, ACM Symposium on Eye Tracking Research and Applications
Citations: 5

Abstract

As eye-tracking technologies mature, gaze is becoming increasingly popular as an input modality. However, in situations that require fast and precise object selection, gaze is difficult to use because of its limited accuracy. We present Gaze+Lip, a hands-free interface that combines gaze and lip reading to enable rapid and precise remote control when interacting with large displays. Gaze+Lip uses gaze for target selection and leverages silent speech to ensure accurate and reliable command execution in noisy scenarios such as watching TV or playing videos on a computer. For evaluation, we implemented the system on a TV and conducted an experiment comparing our method with a dwell-based gaze-only input method. Results showed that Gaze+Lip outperformed the gaze-only approach in both accuracy and input speed. Furthermore, subjective evaluations indicated that Gaze+Lip is easy to understand, easy to use, and has a higher perceived speed than the gaze-only approach.
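The abstract's division of labor, where gaze picks the target and a lip-read command picks the action, can be sketched as a small fusion routine. This is a minimal illustration under assumed names (`Target`, `dispatch`, and the command vocabulary are hypothetical), not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A selectable on-screen control with a rectangular hit region."""
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, gx: int, gy: int) -> bool:
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

def dispatch(targets, gaze, command, vocabulary):
    """Fuse a gaze fixation with a silently spoken command.

    Gaze resolves *which* control is meant (a coarse hit test tolerates
    eye-tracker inaccuracy); the lip-read command says *what* to do.
    Restricting recognition to a small fixed vocabulary is one way to
    reject misrecognitions, which is robust in noisy rooms because no
    audio is involved.
    """
    if command not in vocabulary:
        return None  # reject out-of-vocabulary recognitions
    gx, gy = gaze
    for t in targets:
        if t.contains(gx, gy):
            return (t.name, command)
    return None  # gaze landed on no target

# Hypothetical smart-TV layout
targets = [Target("volume_slider", 0, 0, 100, 40),
           Target("play_button", 120, 0, 60, 40)]
vocab = {"play", "pause", "up", "down"}

print(dispatch(targets, (130, 20), "play", vocab))   # → ('play_button', 'play')
print(dispatch(targets, (50, 20), "hello", vocab))   # → None
```

In the real system the lip-read command would come from a silent-speech recognizer and the gaze point from an eye tracker; the sketch only shows how the two modalities combine so that neither alone triggers an action.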