Accuracy Improvement of Object Selection in Gaze Gesture Application using Deep Learning

M. Alfaroby E., S. Wibirama, I. Ardiyanto
{"title":"Accuracy Improvement of Object Selection in Gaze Gesture Application using Deep Learning","authors":"M. Alfaroby E., S. Wibirama, I. Ardiyanto","doi":"10.1109/ICITEE49829.2020.9271771","DOIUrl":null,"url":null,"abstract":"Gaze-based interaction is a crucial research area. Gaze gesture provides faster interaction between a user and a computer application because people naturally look at the object of interest before taking any other actions. Spontaneous gaze-gesture-based application uses gaze-gesture as an input modality without performing any calibration. The conventional eye tracking systems have a problem with low accuracy. In general, data captured by eye tracker contains errors and noise within gaze position signal. The errors and noise affect the performance of object selection in gaze gesture based application that controls digital contents on the display using smooth-pursuit eye movement. The conventional object selection method suffers from low accuracy (<80%). In this paper, we addressed this accuracy problem with a novel approach using deep learning. We exploited deep learning power to recognize the pattern of eye-gaze data. Long Short Term Memory (LSTM) is a deep learning architecture based on recurrent neural network (RNN). We used LSTM to perform object selection task. The dataset consisted of 34 participants taken from previous study of object selection technique of gaze gesture-based application. Our experimental results show that the proposed method achieved 96.17% of accuracy. In future, our result may be used as a guidance for developing gaze gesture application.","PeriodicalId":245013,"journal":{"name":"2020 12th International Conference on Information Technology and Electrical Engineering (ICITEE)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 12th International Conference on Information Technology and Electrical Engineering (ICITEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICITEE49829.2020.9271771","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Gaze-based interaction is a crucial research area. Gaze gestures provide faster interaction between a user and a computer application because people naturally look at the object of interest before taking any other action. Spontaneous gaze-gesture-based applications use gaze gestures as an input modality without performing any calibration. Conventional eye tracking systems suffer from low accuracy: in general, the data captured by an eye tracker contain errors and noise in the gaze position signal. These errors and noise degrade the performance of object selection in gaze-gesture-based applications that control digital content on the display using smooth-pursuit eye movement. The conventional object selection method suffers from low accuracy (<80%). In this paper, we addressed this accuracy problem with a novel approach using deep learning, exploiting its ability to recognize patterns in eye-gaze data. Long Short-Term Memory (LSTM) is a deep learning architecture based on the recurrent neural network (RNN); we used an LSTM to perform the object selection task. The dataset consisted of 34 participants taken from a previous study of an object selection technique for a gaze-gesture-based application. Our experimental results show that the proposed method achieved an accuracy of 96.17%. In the future, our results may serve as guidance for developing gaze gesture applications.
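To make the idea concrete, below is a minimal sketch of how an LSTM can be framed as an object selection classifier over noisy gaze trajectories, assuming each sample is a short window of (x, y) gaze positions and the output is one of several on-screen moving targets. The layer sizes, window length, sampling rate, and number of targets here are illustrative assumptions, not values reported in the paper, and the paper's exact architecture and preprocessing are not reproduced.

```python
# Hypothetical sketch (not the paper's implementation): an LSTM that maps a
# window of raw gaze positions to one of N candidate on-screen targets.
import torch
import torch.nn as nn


class GazeLSTMClassifier(nn.Module):
    def __init__(self, input_size: int = 2, hidden_size: int = 64,
                 num_layers: int = 2, num_targets: int = 4):
        super().__init__()
        # Each time step is a 2-D gaze sample (x, y) from the eye tracker.
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=num_layers, batch_first=True)
        # The final hidden state is mapped to one score per candidate object.
        self.head = nn.Linear(hidden_size, num_targets)

    def forward(self, gaze_seq: torch.Tensor) -> torch.Tensor:
        # gaze_seq: (batch, time_steps, 2) noisy gaze coordinates.
        _, (h_n, _) = self.lstm(gaze_seq)
        return self.head(h_n[-1])  # (batch, num_targets) logits


if __name__ == "__main__":
    model = GazeLSTMClassifier()
    # A batch of 8 gaze windows, 90 samples each (e.g. ~1.5 s at 60 Hz, assumed).
    dummy = torch.randn(8, 90, 2)
    logits = model(dummy)
    selected = logits.argmax(dim=1)  # index of the predicted target object
    print(selected.shape)            # torch.Size([8])
```

The key design choice this sketch illustrates is that the recurrent model consumes the raw (or lightly filtered) gaze sequence directly, so the network can learn to tolerate the tracker's noise rather than relying on a hand-tuned similarity threshold between gaze and object trajectories.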