Proceedings of the Symposium on Eye Tracking Research and Applications: Latest Articles

Creating a new dynamic measure of the useful field of view using gaze-contingent displays
Ryan V. Ringer, A. Johnson, John G. Gaspar, M. Neider, J. Crowell, A. Kramer, Lester C. Loschky
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578160
Abstract: We have developed a measure of transient changes in the useful field of view (UFOV) in simulators using gaze-contingent displays (GCDs). It can be used to evaluate safety-critical tasks such as driving or flight, and in training to increase the UFOV under cognitive load, stress, and fatigue. Unlike the established UFOV© measure, our measure can be used in simulators. Furthermore, previous peripheral detection tasks used in simulators controlled neither the target's retinal eccentricity nor stimulus intensity. Our approach overcomes these limitations by using GCDs to present stimuli producing equal performance across eccentricities under single-task conditions for two dependent measures: blur detection and Gabor orientation discrimination. We then measure attention under dual-task conditions by varying cognitive load via an N-back task. Our results showed blur sensitivity varied predictably with retinal eccentricity, but detection of blur did not vary with cognitive load. Conversely, peripheral Gabor orientation discrimination showed a significant cognitive load decrement. While this method is still in development, the results suggest that a GC UFOV method is promising.
Citations: 12
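A gaze-contingent display of this kind must know, on every frame, how eccentric a candidate stimulus location is relative to the current gaze sample. The sketch below shows the standard visual-angle computation; it is an illustration of the general idea, not the authors' implementation, and the pixel density and viewing distance are assumed values.

```python
import math

def eccentricity_deg(gaze_px, target_px, px_per_cm, viewing_cm):
    """Degrees of visual angle between the current gaze position and a
    peripheral target, given screen pixel density and viewing distance."""
    dx = (target_px[0] - gaze_px[0]) / px_per_cm
    dy = (target_px[1] - gaze_px[1]) / px_per_cm
    offset_cm = math.hypot(dx, dy)               # on-screen distance
    return math.degrees(math.atan2(offset_cm, viewing_cm))

# Assumed setup: 1920x1080 display at ~37.8 px/cm, viewed from 57 cm
# (at 57 cm, 1 cm on screen is roughly 1 degree of visual angle).
# A target 200 px right of gaze is then about 5.3 degrees eccentric.
print(round(eccentricity_deg((960, 540), (1160, 540), 37.8, 57.0), 1))
```

With this value in hand, a GCD can pick the stimulus intensity (e.g., blur level or Gabor contrast) calibrated for that eccentricity before drawing the frame.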
Machine-extracted eye gaze features: how well do they correlate to sight-reading abilities of piano players?
B. Hoanca, T. C. Smith, Kenrick J. Mock
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578194
Abstract: Skilled piano players are able to decipher and play a musical piece they have never seen before (a skill known as sight-reading). For a sample of 23 piano players of various abilities, we consider the correlation between machine-extracted gaze-path features and the overall human rating. We find that correlation values (between machine-extracted gaze features and overall human ratings) are statistically similar to correlation values between human-extracted task-related ratings (e.g., note accuracy, error rate) and overall human ratings. These high correlation values suggest that an eye-tracking-enabled computer could help students assess their sight-reading abilities, and could possibly advise students on how to improve. The approach could be extended to any musical instrument. For keyboard players, a MIDI keyboard with appropriate software to provide information about note accuracy and timing could complement feedback from an eye tracker, enabling more detailed analysis and advice.
Citations: 1
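The core quantity compared in this paper is a correlation coefficient between a gaze feature and human ratings. A bare-bones Pearson correlation can be sketched as follows; the feature values and ratings below are invented for illustration, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between a gaze feature and a rating."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical data: one machine-extracted feature (say, mean fixation
# duration per note) and an overall human rating for five players.
feature = [0.42, 0.35, 0.51, 0.30, 0.47]
rating  = [3.1, 3.8, 2.6, 4.2, 2.9]
print(pearson_r(feature, rating))  # strongly negative for this made-up sample
```

Repeating this for each machine-extracted feature and each human-extracted rating, and comparing the two sets of coefficients, mirrors the paper's analysis at a high level.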
News stories relevance effects on eye-movements
J. Gwizdka
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578198
Abstract: Relevance is a fundamental concept in information retrieval. We consider relevance from the user's perspective and ask whether the degree of relevance can be inferred from eye-tracking data and whether it is related to the cognitive effort involved in relevance judgments. To this end, we conducted a study in which participants were asked to find information in screen-long text documents containing news stories. Each participant responded to fourteen trials consisting of an information question followed by three documents, each at a different level of relevance (irrelevant, partially relevant, and relevant). The results indicate that relevant documents tended to be continuously read, while irrelevant documents tended to be scanned. In most cases, cognitive effort inferred from eye-tracking data was highest for partially relevant documents and lowest for irrelevant documents.
Citations: 14
A mixture distribution for visual foraging
P. Sarma, Tarunraj Singh
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578210
Abstract: Visual foraging is investigated by examining the nature of the statistical distributions underlying human search strategies. Eye movements uninfluenced by scene perception or higher-level cognition tasks are used to generate a data set that can be analyzed to study 'pure' searches. Eye movements, in the form of the 'jump' lengths constituting the entire search process, are studied to detect the presence of statistical distributions whose parameters can be estimated. Animal ecology studies have reported a Lévy flight/power-law model, which explains animal foraging patterns in a few species. We consider a Lévy flight model to explain visual foraging. Results from data analysis, while not ruling out the presence of a power law entirely, point strongly towards a mixture distribution that faithfully explains visual foraging. This mixture distribution is made up of gamma distributions.
Citations: 0
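The paper's conclusion, a gamma mixture over saccade 'jump' lengths, can be sketched with a small EM loop. This is a generic moment-matching EM assumed for illustration (not the authors' estimation procedure), run here on synthetic amplitudes with invented parameters.

```python
import math
import numpy as np

def gamma_pdf(x, shape, scale):
    """Gamma density, written out so the sketch needs only NumPy."""
    return (x ** (shape - 1.0) * np.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def fit_gamma_mixture(x, k=2, iters=200, seed=0):
    """Rough EM fit of a k-component gamma mixture to jump lengths.
    The M-step moment-matches each component (mean/variance -> shape/scale)
    rather than solving the full gamma maximum-likelihood equations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    weights = np.full(k, 1.0 / k)
    shape = rng.uniform(1.0, 3.0, k)
    scale = x.mean() / shape
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([w * gamma_pdf(x, a, s)
                         for w, a, s in zip(weights, shape, scale)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: reweight, then moment-match each component.
        weights = resp.mean(axis=1)
        m = (resp * x).sum(axis=1) / resp.sum(axis=1)
        v = (resp * (x - m[:, None]) ** 2).sum(axis=1) / resp.sum(axis=1)
        shape, scale = m ** 2 / v, v / m
    return weights, shape, scale

# Synthetic 'jump' lengths from two regimes (short local saccades and
# longer relocations); the generating parameters are invented for the demo.
rng = np.random.default_rng(1)
jumps = np.concatenate([rng.gamma(2.0, 1.0, 500), rng.gamma(9.0, 2.0, 500)])
w, a, s = fit_gamma_mixture(jumps)
print(np.round(w, 2), np.round(a, 2))
```

Comparing the log-likelihood of such a fit against a fitted power law is, in outline, how one would adjudicate between the Lévy and gamma-mixture accounts.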
Eye tracking gaze visualiser: eye tracker and experimental software independent visualisation of gaze data
B. Fehringer
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578191
Abstract: Eye tracking research in disciplines such as cognitive psychology requires specific software packages designed for experiments, supporting reaction-time measurement, blocking and mixing of conditions, and item randomisation. Although recording raw eye-movement data is possible, visualising it with respect to the experimental design is difficult. The eye tracking software currently in use is often built as an all-in-one program that can only visualise the eye tracking data it recorded itself. Therefore, this paper presents a software tool that visualises nearly any recorded gaze data on the corresponding video, independent of the specific software that runs the experiment. Summarised visualisations over randomised item presentations according to experimental conditions can be created. In addition to basic visualisation functionality, further features such as simple object detection, repetitive pattern exploration, and subset selection of subjects are provided.
Citations: 2
ISeeCube: visual analysis of gaze data for video
K. Kurzhals, Florian Heimerl, D. Weiskopf
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578158
Abstract: We introduce a new design for the visual analysis of eye tracking data recorded from dynamic stimuli such as video. ISeeCube includes multiple coordinated views to support different aspects of various analysis tasks. It combines methods for the spatiotemporal analysis of gaze data recorded from unlabeled videos as well as the possibility to annotate and investigate dynamic Areas of Interest (AOIs). A static overview of the complete data set is provided by a space-time cube visualization that shows gaze points with density-based color mapping and spatiotemporal clustering of the data. A timeline visualization supports the analysis of dynamic AOIs and the viewers' attention on them. AOI-based scanpaths of different viewers can be clustered by their Levenshtein distance, an attention map, or the transitions between AOIs. With the provided visual analytics techniques, the exploration of eye tracking data recorded from several viewers is supported for a wide range of analysis tasks.
Citations: 64
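The Levenshtein clustering mentioned above operates on scanpaths encoded as strings, one symbol per AOI visit. A minimal edit-distance sketch (the AOI labels are invented) could look like this; the resulting pairwise distance matrix can then be fed to any off-the-shelf hierarchical clustering routine.

```python
def levenshtein(a, b):
    """Edit distance between two AOI scanpath strings
    (one letter per AOI visit), via the classic two-row DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical scanpaths over AOIs A-D for two viewers of the same clip:
print(levenshtein("ABACD", "ABCCD"))  # 1 (one substituted AOI visit)
```

Two viewers whose scanpath strings are a small edit distance apart attended the AOIs in nearly the same order, which is exactly the similarity notion the clustering exploits.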
Gaze guidance for the visually impaired
Thomas C. Kübler, Enkelejda Kasneci, W. Rosenstiel
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2583038
Abstract: Visual perception is perhaps the most important sensory input. During driving, about 90% of the relevant information is related to the visual input [Taylor 1982]. However, the quality of visual perception decreases with age, mainly owing to a reduction in visual acuity or as a consequence of diseases affecting the visual system. Amongst the most severe types of visual impairment are visual field defects (areas of reduced perception in the visual field), which occur as a consequence of diseases affecting the brain, e.g., stroke, brain injury, trauma, or diseases affecting the optic nerve, e.g., glaucoma. Due to demographic aging, the number of people with such visual impairments is expected to rise [Kasneci 2013]. Since persons suffering from visual impairments may overlook hazardous objects, they are prohibited from driving. This, however, leads to a decrease in quality of life, mobility, and participation in social life. Several studies have shown that some patients exhibit safe driving behavior despite their visual impairment by performing effective visual exploration, i.e., adequate eye and head movements (e.g., towards their visual field defect [Kasneci et al. 2014b]). Thus, a better understanding of visual perception mechanisms, i.e., of why and how we attend to certain parts of our environment while "ignoring" others, is a key question for helping visually impaired persons in complex, real-life tasks, such as driving a car.
Citations: 2
Collaborative eye tracking for image analysis
Brendan David-John, S. Sridharan, Reynold J. Bailey
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578215
Abstract: We present a framework for collaborative image analysis where gaze information is shared across all users. A server gathers and broadcasts fixation data from/to all clients, and the clients visualize this information. Several visualization options are provided. The system can run in real time, or gaze information can be recorded and shared the next time an image is accessed. Our framework is scalable to large numbers of clients with different eye tracking devices. To evaluate our system, we used it within the context of a spot-the-differences game. Subjects were presented with 10 image pairs, each containing 5 differences, and were given one minute to detect the differences in each image. Our study was divided into three sessions: in session 1, subjects completed the task individually; in session 2, pairs of subjects completed the task without gaze sharing; and in session 3, pairs of subjects completed the task with gaze sharing. We measured accuracy, time-to-completion, and visual coverage over each image to evaluate the performance of subjects in each session. We found that visualizing shared gaze information by graying out previously scrutinized regions of an image significantly increases the dwell time in the areas of the images that are relevant to the task (i.e., the regions where differences actually occurred). Furthermore, accuracy and time-to-completion also improved over collaboration without gaze sharing, though the effects were not significant. Our framework is useful for a wide range of image analysis applications that can benefit from a collaborative approach.
Citations: 2
Simulating refraction and reflection of ocular surfaces for algorithm validation in outdoor mobile eye tracking videos
Thomas B. Kinsman, J. Pelz
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578203
Abstract: To create input videos for testing pupil detection algorithms for outdoor eye tracking, we developed a simulation of the eye with front-surface reflections from the cornea and refraction at the air/cornea and cornea/aqueous boundaries. The scene and iris are simulated using texture mapping and are alpha-blended to produce the final image of the eye with reflections and refractions. Simulating refraction is important in order to observe the elliptical shape the pupil takes on as it moves off axis, and to account for the difference between the true pupil position and the apparent (entrance) pupil position. Sequences of images are combined to produce input videos for testing the next generation of pupil detection and tracking algorithms, which must sort the pupil out from distracting edges and reflected objects.
Citations: 4
An eye-tracking study assessing the comprehension of C++ and Python source code
Rachel Turner, Michael Falcone, Bonita Sharif, A. Lazar
Pub Date: 2014-03-26 | DOI: 10.1145/2578153.2578218
Abstract: A study assessing the effect of programming language on student comprehension of source code is presented, comparing C++ and Python on two task categories: overview tasks and find-bug tasks. Eye gazes were tracked while thirty-eight students completed tasks and answered questions. Results indicate no significant difference in accuracy or time; however, there is a significant difference in the rate at which students look at buggy lines of code. These results begin to provide some direction as to the effect a programming language might have in introductory programming classes.
Citations: 62