Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Hand- and gaze-control of telepresence robots
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3317956.3318149
Guangtao Zhang, J. P. Hansen, Katsumi Minakata
{"title":"Hand- and gaze-control of telepresence robots","authors":"Guangtao Zhang, J. P. Hansen, Katsumi Minakata","doi":"10.1145/3317956.3318149","DOIUrl":"https://doi.org/10.1145/3317956.3318149","url":null,"abstract":"Mobile robotic telepresence systems are increasingly used to promote social interaction between geographically dispersed people. People with severe motor disabilities may use eye-gaze to control a telepresence robots. However, use of gaze control for navigation of robots needs to be explored. This paper presents an experimental comparison between gaze-controlled and hand-controlled telepresence robots with a head-mounted display. Participants (n = 16) had similar experience of presence and self-assessment, but gaze control was 31% slower than hand control. Gaze-controlled robots had more collisions and higher deviations from optimal paths. Moreover, with gaze control, participants reported a higher workload, a reduced feeling of dominance, and their situation awareness was significantly degraded. The accuracy of their post-trial reproduction of the maze layout and the trial duration were also significantly lower.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130718384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
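The "deviations from optimal paths" measure mentioned above is not spelled out in the abstract. Below is a minimal sketch of one common formulation, assuming the metric is the mean distance from each sampled robot position to the nearest point on the optimal route polyline; the function names are our own, not the paper's:

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from 2D point p to the line segment from a to b."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    # Project p onto the segment, clamping to its endpoints.
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def mean_path_deviation(trajectory, optimal_path):
    """Mean distance of driven-trajectory samples (N x 2) to an optimal polyline (M x 2)."""
    trajectory = np.asarray(trajectory, float)
    optimal_path = np.asarray(optimal_path, float)
    dists = [
        min(point_to_segment_distance(p, optimal_path[i], optimal_path[i + 1])
            for i in range(len(optimal_path) - 1))
        for p in trajectory
    ]
    return float(np.mean(dists))
```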

POITrack
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3321491
F. Göbel, P. Kiefer
{"title":"POITrack","authors":"F. Göbel, P. Kiefer","doi":"10.1145/3314111.3321491","DOIUrl":"https://doi.org/10.1145/3314111.3321491","url":null,"abstract":"","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130843240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Iris
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3318228
Sarah D'Angelo, Jeff Brewer, D. Gergle
{"title":"Iris","authors":"Sarah D'Angelo, Jeff Brewer, D. Gergle","doi":"10.1145/3314111.3318228","DOIUrl":"https://doi.org/10.1145/3314111.3318228","url":null,"abstract":"","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130897376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Pointing by gaze, head, and foot in a head-mounted display
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3317956.3318150
Katsumi Minakata, J. P. Hansen, I. Mackenzie, Per Baekgaard, Vijay Rajanna
{"title":"Pointing by gaze, head, and foot in a head-mounted display","authors":"Katsumi Minakata, J. P. Hansen, I. Mackenzie, Per Baekgaard, Vijay Rajanna","doi":"10.1145/3317956.3318150","DOIUrl":"https://doi.org/10.1145/3317956.3318150","url":null,"abstract":"This paper presents a Fitts' law experiment and a clinical case study performed with a head-mounted display (HMD). The experiment compared gaze, foot, and head pointing. With the equipment setup we used, gaze was slower than the other pointing methods, especially in the lower visual field. Throughputs for gaze and foot pointing were lower than mouse and head pointing and their effective target widths were also higher. A follow-up case study included seven participants with movement disorders. Only two of the participants were able to calibrate for gaze tracking but all seven could use head pointing, although with throughput less than one-third of the non-clinical participants.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121401351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
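Throughput and effective target width in Fitts' law studies are conventionally computed in the ISO 9241-9 / MacKenzie style sketched below. The abstract does not reproduce its formulas, so treat this as the conventional definition rather than the authors' exact code:

```python
import math
import statistics

def throughput(amplitudes, endpoint_errors, movement_times):
    """Throughput (bits/s) from per-trial movement amplitudes, signed endpoint
    deviations along the task axis (same units as amplitudes), and movement times (s)."""
    ae = statistics.mean(amplitudes)                # effective amplitude
    we = 4.133 * statistics.stdev(endpoint_errors)  # effective width (96% hit-rate convention)
    ide = math.log2(ae / we + 1)                    # effective index of difficulty, in bits
    return ide / statistics.mean(movement_times)

# Illustrative made-up trials, not the paper's data:
tp = throughput([256.0, 260.0, 250.0], [4.0, -6.0, 2.5], [0.62, 0.70, 0.66])
```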

Semantic gaze labeling for human-robot shared manipulation
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3319840
Reuben M. Aronson, H. Admoni
{"title":"Semantic gaze labeling for human-robot shared manipulation","authors":"Reuben M. Aronson, H. Admoni","doi":"10.1145/3314111.3319840","DOIUrl":"https://doi.org/10.1145/3314111.3319840","url":null,"abstract":"Human-robot collaboration systems benefit from recognizing people's intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users' intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points and assign fixation position to one of those known points. We validate our approach on synthetic data to demonstrate that classifying using velocity features is more robust than a position matching approach. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126279801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
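A minimal sketch of the velocity-feature idea as we read the abstract: rather than assigning a fixation to the nearest known point (position matching, which calibration drift and head motion corrupt), compare how the gaze point moved between subsequent fixations with how each known point moved over the same interval, and pick the best match. The concrete feature and function below are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def assign_fixation(prev_gaze, curr_gaze, prev_points, curr_points):
    """prev_gaze/curr_gaze: 2D gaze positions at two subsequent fixations.
    prev_points/curr_points: dict name -> 2D position of each known point
    at those times. Returns the name whose motion best matches the gaze motion."""
    gaze_motion = np.asarray(curr_gaze, float) - np.asarray(prev_gaze, float)

    def velocity_mismatch(name):
        point_motion = (np.asarray(curr_points[name], float)
                        - np.asarray(prev_points[name], float))
        return float(np.linalg.norm(gaze_motion - point_motion))

    return min(curr_points, key=velocity_mismatch)
```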

Aiming for the quiet eye in biathlon
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3319850
D. Hansen, Amelie Heinrich, R. Cañal-Bruland
{"title":"Aiming for the quiet eye in biathlon","authors":"D. Hansen, Amelie Heinrich, R. Cañal-Bruland","doi":"10.1145/3314111.3319850","DOIUrl":"https://doi.org/10.1145/3314111.3319850","url":null,"abstract":"The duration of the so-called \"Quiet Eye\" (QE) - the final fixation before the initiation of a critical movement - seems to be linked to better perceptual-motor performances in various domains. For instance, experts show longer QE durations when compared to their less skilled counterparts. The aim of this paper was to replicate and extend previous work on the QE [Vickers and Williams 2007] in elite biathletes in an ecologically valid environment. Specifically, we tested whether longer QE durations result in higher shooting accuracy. To this end, we developed a gun-mounted eye tracker as a means to obtain reliable gaze data without interfering with the athletes' performance routines. During regular training protocols we collected gaze and performance data of 9 members (age 19.8 ± 0.45) of the German national junior team. The results did not show a significant effect of QE duration on shooting performance. Based on our findings, we critically discuss various conceptual as well as methodological issues with the QE literature that need to be aligned in future research to resolve current inconsistencies.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134280199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
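A sketch of how a QE duration could be extracted from fixation data under the definition quoted above (the final fixation before the initiation of the critical movement). Real QE coding schemes add onset, offset, and location criteria that vary across studies, which is part of the inconsistency the paper discusses:

```python
def quiet_eye_duration(fixations, trigger_time_s):
    """fixations: chronological list of (onset_s, offset_s) tuples.
    trigger_time_s: time of movement initiation (here, the shot).
    Returns the duration of the final fixation starting before the trigger, or None."""
    before = [f for f in fixations if f[0] <= trigger_time_s]
    if not before:
        return None
    onset, offset = before[-1]   # final fixation before movement initiation
    return offset - onset
```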

Motion tracking of iris features for eye tracking
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3322872
A. Chaudhary
{"title":"Motion tracking of iris features for eye tracking","authors":"A. Chaudhary","doi":"10.1145/3314111.3322872","DOIUrl":"https://doi.org/10.1145/3314111.3322872","url":null,"abstract":"Current video-based eye trackers fail to acquire a high signal-to-noise (SNR) ratio which is crucial for specific applications like interactive systems, event detection, the study of various eye movements, and most importantly estimating the gaze position with high certainty. Specifically, current video-based eye trackers over-rely on precise localization of the pupil boundary and/or corneal reflection (CR) for gaze tracking, which often results in inaccuracies and large sample-to-sample root mean square (RMS-S2S). Therefore, it is crucial to address the shortcomings of these trackers, and we plan to study a new video-based eye tracking methodology focused on simultaneously tracking the motion of many iris features and investigate its implications for obtaining high accuracy and precision. In our preliminary work, the method has shown great potential for robust detection of microsaccades over 0.2 degrees with high confidence. Hence, we plan to explore and optimize this technique.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131607241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
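One plausible realization of "simultaneously tracking the motion of many iris features" is pyramidal Lucas-Kanade optical flow on corner-like features detected inside the iris region; a hedged OpenCV sketch follows (the author's actual pipeline may differ):

```python
import cv2
import numpy as np

def iris_motion(prev_gray, curr_gray, iris_mask):
    """Estimate frame-to-frame eye motion (2D pixel shift) from many iris features.
    prev_gray/curr_gray: 8-bit grayscale eye images; iris_mask: 8-bit mask of the iris."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01,
                                  minDistance=3, mask=iris_mask)
    if pts is None:
        return np.zeros(2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    motion = (nxt[ok] - pts[ok]).reshape(-1, 2)   # per-feature displacement
    # A robust average over many features is what promises the high precision.
    return np.median(motion, axis=0) if len(motion) else np.zeros(2)
```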

Using developer eye movements to externalize the mental model used in code summarization tasks
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3319834
Nahla J. Abid, Jonathan I. Maletic, Bonita Sharif
{"title":"Using developer eye movements to externalize the mental model used in code summarization tasks","authors":"Nahla J. Abid, Jonathan I. Maletic, Bonita Sharif","doi":"10.1145/3314111.3319834","DOIUrl":"https://doi.org/10.1145/3314111.3319834","url":null,"abstract":"Eye movements of developers are used to speculate the mental cognition model (i.e., bottom-up or top-down) applied during program comprehension tasks. The cognition models examine how programmers understand source code by describing the temporary information structures in the programmer's short term memory. The two types of models that we are interested in are top-down and bottom-up. The top-down model is normally applied as-needed (i.e., the domain of the system is familiar). The bottom-up model is typically applied when a developer is not familiar with the domain or the source code. An eye-tracking study of 18 developers reading and summarizing Java methods is used as our dataset for analyzing the mental cognition model. The developers provide a written summary for methods assigned to them. In total, 63 methods are used from five different systems. The results indicate that on average, experts and novices read the methods more closely (using the bottom-up mental model) than bouncing around (using top-down). However, on average novices spend longer gaze time performing bottom-up (66s.) compared to experts (43s.)","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121073001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
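One simple, illustrative way to operationalize "reading closely" (bottom-up) versus "bouncing around" (top-down) from a sequence of fixated source-code line numbers. Both the metric and the threshold below are our assumptions, not the paper's coding scheme:

```python
def linearity_ratio(fixated_lines, max_step=2):
    """Fraction of fixation-to-fixation transitions that move at most max_step
    source lines; values near 1 suggest close, linear (bottom-up) reading."""
    steps = [abs(b - a) for a, b in zip(fixated_lines, fixated_lines[1:])]
    if not steps:
        return 0.0
    return sum(s <= max_step for s in steps) / len(steps)

# e.g. linearity_ratio([1, 2, 3, 3, 4, 12, 13]) -> 5/6: mostly linear reading
```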

Assessing surgeons' skill level in laparoscopic cholecystectomy using eye metrics
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3319832
N. Gunawardena, Michael Matscheko, B. Anzengruber, A. Ferscha, Martin Schobesberger, A. Shamiyeh, B. Klugsberger, P. Solleder
{"title":"Assessing surgeons' skill level in laparoscopic cholecystectomy using eye metrics","authors":"N. Gunawardena, Michael Matscheko, B. Anzengruber, A. Ferscha, Martin Schobesberger, A. Shamiyeh, B. Klugsberger, P. Solleder","doi":"10.1145/3314111.3319832","DOIUrl":"https://doi.org/10.1145/3314111.3319832","url":null,"abstract":"Laparoscopic surgery has revolutionised state of the art in surgical health care. However, its complexity puts a significant burden on the surgeon's cognitive resources resulting in major biliary injuries. With the increasing number of laparoscopic surgeries, it is crucial to identify surgeons' cognitive loads (CL) and levels of focus in real time to give them unobtrusive feedback when detecting the suboptimal level of attention. Assuming that the experts appear to be more focused on attention, we investigate how the skill level of surgeons during live surgery is reflected through eye metrics. Forty-two laparoscopic surgeries have been conducted with four surgeons who have different expertise levels. Concerning eye metrics, we have used six metrics which belong to fixation and pupillary based metrics. With the use of mean, standard deviation and ANOVA test we have proven three reliable metrics which we can use to differentiate the skill level during live surgeries. In future studies, these three metrics will be used to classify the surgeons' cognitive load and level of focus during the live surgery using machine learning techniques.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117282565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
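A sketch of the metric screening the abstract describes (group means, standard deviations, and an ANOVA across expertise levels), using SciPy. The metric names and data layout are hypothetical:

```python
from scipy.stats import f_oneway

def significant_metrics(per_group_values, alpha=0.05):
    """per_group_values: dict metric_name -> list of per-group value lists, e.g.
    {"fixation_rate": [expert_vals, senior_vals, junior_vals, novice_vals]}.
    Returns the metrics whose group means differ significantly (one-way ANOVA)."""
    reliable = {}
    for name, groups in per_group_values.items():
        f_stat, p_value = f_oneway(*groups)
        if p_value < alpha:
            reliable[name] = (f_stat, p_value)
    return reliable
```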

Improving real-time CNN-based pupil detection through domain-specific data augmentation
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Pub Date: 2019-06-25. DOI: 10.1145/3314111.3319914
Shaharam Eivazi, Thiago Santini, Alireza Keshavarzi, Thomas C. Kübler, Andrea Mazzei
{"title":"Improving real-time CNN-based pupil detection through domain-specific data augmentation","authors":"Shaharam Eivazi, Thiago Santini, Alireza Keshavarzi, Thomas C. Kübler, Andrea Mazzei","doi":"10.1145/3314111.3319914","DOIUrl":"https://doi.org/10.1145/3314111.3319914","url":null,"abstract":"Deep learning is a promising technique for real-world pupil detection. However, the small amount of available accurately-annotated data poses a challenge when training such networks. Here, we utilize non-challenging eye videos where algorithmic approaches perform virtually without errors to automatically generate a foundational data set containing subpixel pupil annotations. Then, we propose multiple domain-specific data augmentation methods to create unique training sets containing controlled distributions of pupil-detection challenges. The feasibility, convenience, and advantage of this approach is demonstrated by training a CNN with these datasets. The resulting network outperformed current methods in multiple publicly-available, realistic, and challenging datasets, despite being trained solely with the augmented eye images. This network also exhibited better generalization w.r.t. the latest state-of-the-art CNN: Whereas on datasets similar to training data, the nets displayed similar performance, on datasets unseen to both networks, ours outperformed the state-of-the-art by ≈27% in terms of detection rate.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124115474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
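A sketch of the flavor of domain-specific augmentation described: start from a clean, automatically annotated eye image and inject controlled pupil-detection challenges such as synthetic reflections, occlusions, and defocus blur. All parameters below are illustrative assumptions, not the paper's recipe:

```python
import numpy as np

def augment_eye_image(img, rng=None):
    """img: grayscale eye image, float32 array in [0, 1]. Returns an augmented copy."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    h, w = out.shape
    # Synthetic specular reflection: a bright Gaussian blob at a random location.
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * 6.0 ** 2))
    out = np.clip(out + 0.8 * blob, 0.0, 1.0)
    # Partial occlusion (e.g., eyelid shadow): darken a random top band.
    out[: int(rng.integers(0, h // 2)), :] *= 0.3
    # Mild defocus blur via a 3x3 box filter.
    padded = np.pad(out, 1, mode="edge")
    out = np.mean([padded[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)
    return out.astype(np.float32)
```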