EasyGaze: Hybrid eye tracking approach for handheld mobile devices
Shiwei Cheng, Qiufeng Ping, Jialing Wang, Yijian Chen
Virtual Reality Intelligent Hardware, 4(2): 173–188, April 2022. DOI: 10.1016/j.vrih.2021.10.003
Background
Eye-tracking technology for mobile devices has made significant progress. However, owing to the limited computing capacity of handheld devices and the complexity of their usage contexts, conventional image-feature-based methods cannot extract features accurately, which degrades tracking performance.
Methods
This study proposes a novel approach that combines appearance-based and feature-based eye-tracking methods. Face and eye regions were first detected, and the resulting image patches were used as inputs to an appearance model that located the feature points. These feature points were then used to construct feature vectors, such as the vector from the eye-corner center to the pupil center, from which the on-screen gaze fixation coordinates were calculated.
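The final step of the pipeline described above — mapping a corner-center-to-pupil-center feature vector to screen coordinates — is commonly done with a polynomial regression fitted on calibration samples. The following is a minimal sketch of that idea, not the paper's implementation; the second-order feature expansion and the function names are assumptions for illustration.

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial expansion of a corner-to-pupil vector (vx, vy)."""
    vx, vy = v
    return np.array([1.0, vx, vy, vx * vy, vx ** 2, vy ** 2])

def fit_gaze_mapping(vectors, screen_points):
    """Least-squares fit of a polynomial mapping from feature vectors
    (e.g. eye-corner center to pupil center) to screen coordinates,
    using calibration samples where the true fixation point is known."""
    A = np.stack([poly_features(v) for v in vectors])        # shape (n, 6)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points), rcond=None)
    return coeffs                                            # shape (6, 2)

def estimate_gaze(coeffs, v):
    """Map one corner-to-pupil vector to an on-screen fixation point."""
    return poly_features(v) @ coeffs
```

In practice the user fixates a grid of calibration targets, each yielding one (feature vector, screen point) pair; the fitted coefficients are then applied to every subsequent frame.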
Results
To identify the feature vector with the best performance, we compared candidate vectors under different image resolutions and illumination conditions. The best average gaze-estimation accuracy, a visual angle of 1.93°, was achieved when the eye-image resolution was 96 × 48 pixels and the light source illuminated the eye from the front.
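Gaze accuracy reported in degrees of visual angle, as above, relates on-screen error to viewing distance. A small sketch of that conversion follows; the 300 mm viewing distance in the usage note is a typical handheld value assumed for illustration, not a figure from the paper.

```python
import math

def error_in_degrees(err_mm, viewing_distance_mm):
    """Convert an on-screen gaze error (mm) to visual angle (degrees)
    for a viewer at the given distance from the screen."""
    return math.degrees(2 * math.atan(err_mm / (2 * viewing_distance_mm)))
```

For example, at an assumed viewing distance of 300 mm, an on-screen error of roughly 10 mm corresponds to about 1.9° of visual angle, so reported angular accuracy scales with how far the device is held.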
Conclusions
Compared with existing methods, our approach improved gaze-fixation accuracy and offered better usability.