Proceedings of the Symposium on Eye Tracking Research and Applications: Latest Publications

Measuring and visualizing attention in space with 3D attention volumes
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168560
Thies Pfeiffer
{"title":"Measuring and visualizing attention in space with 3D attention volumes","authors":"Thies Pfeiffer","doi":"10.1145/2168556.2168560","DOIUrl":"https://doi.org/10.1145/2168556.2168560","url":null,"abstract":"Knowledge about the point of regard is a major key for the analysis of visual attention in areas such as psycholinguistics, psychology, neurobiology, computer science and human factors. Eye tracking is thus an established methodology in these areas, e. g., for investigating search processes, human communication behavior, product design or human-computer interaction. As eye tracking is a process which depends heavily on technology, the progress of gaze use in these scientific areas is tied closely to the advancements of eye-tracking technology. It is thus not surprising that in the last decades, research was primarily based on 2D stimuli and rather static scenarios, regarding both content and observer. Only with the advancements in mobile and robust eye-tracking systems, the observer is freed to physically interact in a 3D target scenario. Measuring and analyzing the point of regards in 3D space, however, requires additional techniques for data acquisition and scientific visualization. We describe the process for measuring the 3D point of regard and provide our own implementation of this process, which extends recent approaches of combining eye tracking with motion capturing, including holistic estimations of the 3D point of regard. In addition, we present a refined version of 3D attention volumes for representing and visualizing attention in 3D space.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126312825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 60
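The abstract does not spell out how an attention volume is computed, but a common formulation accumulates duration-weighted Gaussian contributions around each 3D point of regard in a voxel grid. A minimal numpy sketch of that idea follows; the grid size, bounds, and sigma falloff are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def accumulate_attention_volume(points_of_regard, durations,
                                grid_shape=(64, 64, 64),
                                bounds=((0, 2), (0, 2), (0, 2)),
                                sigma=0.05):
    """Accumulate 3D points of regard into a voxel grid, weighting
    each point by fixation duration with an isotropic Gaussian falloff."""
    volume = np.zeros(grid_shape)
    # Voxel center coordinates along each axis of the bounded cube.
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, grid_shape)]
    xx, yy, zz = np.meshgrid(*axes, indexing="ij")
    for (px, py, pz), dur in zip(points_of_regard, durations):
        d2 = (xx - px)**2 + (yy - py)**2 + (zz - pz)**2
        volume += dur * np.exp(-d2 / (2 * sigma**2))
    return volume

# Example: two fixations of 300 ms and 150 ms inside a 2 m cube.
vol = accumulate_attention_volume([(1.0, 1.0, 1.0), (0.5, 1.2, 0.8)],
                                  [0.3, 0.15])
print(vol.max())
```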
The validity of using non-representative users in gaze communication research
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168603
H. Istance, Stephen Vickers, Aulikki Hyrskykari
{"title":"The validity of using non-representative users in gaze communication research","authors":"H. Istance, Stephen Vickers, Aulikki Hyrskykari","doi":"10.1145/2168556.2168603","DOIUrl":"https://doi.org/10.1145/2168556.2168603","url":null,"abstract":"Gaze-based interaction techniques have been investigated for the last two decades, and in many cases the evaluation of these has been based on trials with able-bodied users and conventional usability criteria, mainly speed and accuracy. The target user group of many of the gaze-based techniques investigated is, however, people with different types of physical disabilities. We present the outcomes of two studies that compare the performance of two groups of participants with a type of physical disability (one being cerebral palsy and the other muscular dystrophy) with that of a control group of able-bodied participants doing a task using a particular gaze interaction technique. One study used a task based on dwell-time selection, and the other used a task based on gaze gestures. In both studies, the groups of participants with physical disabilities performed significantly worse than the able-bodied control participants. We question the ecological validity of research into gaze interaction intended for people with physical disabilities that only uses able-bodied participants in evaluation studies without any testing using members of the target user population.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126023485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
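For context, dwell-time selection (the technique used in the first study) triggers a selection once gaze rests on the same element for a fixed interval. A minimal sketch of that logic, with a hypothetical sample structure that is not from the paper:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float       # timestamp in seconds
    target: str    # id of the on-screen element under gaze, or "" if none

def dwell_select(samples, dwell_time=0.8):
    """Return (target, time) pairs selected once gaze has rested on the
    same element for at least `dwell_time` seconds."""
    selections = []
    current, start = None, 0.0
    for s in samples:
        if s.target != current:
            current, start = s.target, s.t   # gaze moved to a new element
        elif current and s.t - start >= dwell_time:
            selections.append((current, s.t))
            current, start = None, s.t       # reset so we don't re-select
    return selections

samples = [GazeSample(t / 100, "button_a" if t < 90 else "button_b")
           for t in range(0, 120, 2)]
print(dwell_select(samples))  # selects button_a after 0.8 s of dwell
```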
Gaze tracking in wide area using multiple camera observations
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168614
A. Utsumi, Kotaro Okamoto, N. Hagita, Kazuhiro Takahashi
{"title":"Gaze tracking in wide area using multiple camera observations","authors":"A. Utsumi, Kotaro Okamoto, N. Hagita, Kazuhiro Takahashi","doi":"10.1145/2168556.2168614","DOIUrl":"https://doi.org/10.1145/2168556.2168614","url":null,"abstract":"We propose a multi-camera-based gaze tracking system that provides a wide observation area. In our system, multiple camera observations are used to expand the detection area by employing mosaic observations. Each facial feature and eye region image can be observed by different cameras, and in contrast to stereo-based systems, no shared observations are required. This feature relaxes the geometrical constraints in terms of head orientation and camera viewpoints and realizes wide availability of gaze tracking with a small number of cameras. In experiments, we confirmed that our implemented system can track head rotation of 120° with two cameras. The gaze estimation accuracy is 5.4° horizontally and 9.7° vertically.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116246042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
A flexible gaze tracking algorithm evaluation workbench
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168621
D. Droege, D. Paulus
{"title":"A flexible gaze tracking algorithm evaluation workbench","authors":"D. Droege, D. Paulus","doi":"10.1145/2168556.2168621","DOIUrl":"https://doi.org/10.1145/2168556.2168621","url":null,"abstract":"The development of gaze tracking algorithms is very much bound to the specific setup and properties of the respective system they are used in. This makes it hard e. g. to compare their performance. We propose Gazelnut, a modular system to ease the development and comparison of gaze tracking algorithms, which also makes it independent from the permanent access to specific hardware. Building on the message passing architecture of the \"robot operating system\" (ROS) the system provides a flexible base to record and replay sessions, record the input from multiple cameras, run exchangeable algorithms on such sessions, store their individual results on the recorded (or live) scene, run different algorithms in parallel to compare their results and attach additional diagnostic modules to the running system.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116604106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
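Since Gazelnut builds on ROS message passing, an algorithm module would typically be a node that subscribes to camera topics and publishes its results. The rospy sketch below shows such a node; the topic names and message layout are placeholders, as the abstract does not specify Gazelnut's actual interfaces.

```python
#!/usr/bin/env python
# Minimal ROS node of the kind Gazelnut's architecture suggests: it
# subscribes to an eye-camera image stream and republishes a gaze
# estimate. Topic names are assumptions, not Gazelnut's real API.
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

def on_eye_image(img_msg):
    # A real module would run its pupil-detection algorithm here.
    gaze = PointStamped()
    gaze.header = img_msg.header
    gaze.point.x, gaze.point.y = 0.0, 0.0  # placeholder estimate
    gaze_pub.publish(gaze)

rospy.init_node("example_gaze_algorithm")
gaze_pub = rospy.Publisher("/gaze/estimate", PointStamped, queue_size=10)
rospy.Subscriber("/eye_camera/image_raw", Image, on_eye_image)
rospy.spin()
```

Because every module only talks over topics, recorded sessions can be replayed into the same node unchanged, which is what makes the record/replay and side-by-side algorithm comparison described in the abstract possible.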
Reading and estimating gaze on smart phones
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168643
R. Biedert, A. Dengel, Georg Buscher, Arman Vartan
{"title":"Reading and estimating gaze on smart phones","authors":"R. Biedert, A. Dengel, Georg Buscher, Arman Vartan","doi":"10.1145/2168556.2168643","DOIUrl":"https://doi.org/10.1145/2168556.2168643","url":null,"abstract":"While lots of reading happens on mobile devices, little research has been performed on how the reading-interaction actually takes place. Therefore we describe our findings on a study conducted with 18 users which were asked to read a number of texts while their touch and gaze data was being recorded. We found three reader types and identified their preferred alignment of text on the screen. Based on our findings we are able to computationally estimate the reading area with an approximate .81 precision and .89 recall. Our computed reading speed estimate has an average 10.9% wpm error in contrast to the measured speed, and combining both techniques we can pinpoint the reading location at a given time with an overall word error of 9.26 words, or about three lines of text on our device.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122685010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
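The reported 10.9% wpm error is a relative error between the estimated and measured reading speed. A small sketch with made-up numbers shows how such a figure is computed:

```python
def wpm(words_read, seconds):
    """Reading speed in words per minute."""
    return words_read * 60.0 / seconds

def relative_wpm_error(estimated_wpm, measured_wpm):
    """Relative error of an estimate against ground truth, in percent."""
    return abs(estimated_wpm - measured_wpm) / measured_wpm * 100.0

measured = wpm(words_read=250, seconds=60)    # 250 wpm ground truth
estimated = wpm(words_read=223, seconds=60)   # hypothetical estimate
print(f"{relative_wpm_error(estimated, measured):.1f}% wpm error")  # 10.8%
```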
Using ScanMatch scores to understand differences in eye movements between correct and incorrect solvers on physics problems
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168591
Adrian M. Madsen, Adam M. Larson, Lester C. Loschky, N. S. Rebello
{"title":"Using ScanMatch scores to understand differences in eye movements between correct and incorrect solvers on physics problems","authors":"Adrian M. Madsen, Adam M. Larson, Lester C. Loschky, N. S. Rebello","doi":"10.1145/2168556.2168591","DOIUrl":"https://doi.org/10.1145/2168556.2168591","url":null,"abstract":"Using a ScanMatch algorithm we investigate scan path differences between subjects who answer physics problems correctly and incorrectly. This algorithm bins a saccade sequence spatially and temporally, recodes this information to create a sequence of letters representing fixation location, duration and order, and compares two sequences to generate a similarity score. We recorded eye movements of 24 individuals on six physics problems containing diagrams with areas consistent with a novice-like response and areas of high perceptual salience. We calculated average ScanMatch similarity scores comparing correct solvers to one another (C-C), incorrect solvers to one another (I-I), and correct solvers to incorrect solvers (C-I). We found statistically significant differences between the C-C and I-I comparisons on only one of the problems. This seems to imply that top down processes relying on incorrect domain knowledge, rather than bottom up processes driven by perceptual salience, determine the eye movements of incorrect solvers.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127588845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
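As a rough illustration of the recoding step, the sketch below bins fixations into a coarse spatial grid, repeats each cell's letter once per temporal bin, and scores two sequences for similarity. The grid dimensions and bin size are assumptions, and difflib's ratio merely stands in for ScanMatch's actual Needleman-Wunsch alignment with a distance-based substitution matrix.

```python
import difflib
import string

def encode_scanpath(fixations, n_cols=4, n_rows=3, bin_ms=100,
                    width=1024, height=768):
    """Recode fixations (x, y, duration_ms) as a letter sequence:
    one letter per spatial grid cell, repeated once per temporal bin."""
    letters = string.ascii_uppercase
    seq = []
    for x, y, dur in fixations:
        col = min(int(x / width * n_cols), n_cols - 1)
        row = min(int(y / height * n_rows), n_rows - 1)
        letter = letters[row * n_cols + col]
        seq.extend(letter * max(1, round(dur / bin_ms)))
    return "".join(seq)

def similarity(seq_a, seq_b):
    """Crude similarity stand-in; ScanMatch proper aligns the sequences
    with Needleman-Wunsch and a cell-distance substitution matrix."""
    return difflib.SequenceMatcher(None, seq_a, seq_b).ratio()

a = encode_scanpath([(100, 100, 250), (800, 600, 300)])
b = encode_scanpath([(120, 110, 200), (790, 580, 350)])
print(similarity(a, b))  # high score: nearly identical scan paths
```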
Impact of subtle gaze direction on short-term spatial information recall
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168567
Reynold J. Bailey, Ann McNamara, Aaron Costello, S. Sridharan, C. Grimm
{"title":"Impact of subtle gaze direction on short-term spatial information recall","authors":"Reynold J. Bailey, Ann McNamara, Aaron Costello, S. Sridharan, C. Grimm","doi":"10.1145/2168556.2168567","DOIUrl":"https://doi.org/10.1145/2168556.2168567","url":null,"abstract":"Contents of Visual Short-Term Memory depend highly on viewer attention. It is possible to influence where attention is allocated using a technique called Subtle Gaze Direction (SGD). SGD combines eye tracking with subtle image-space modulations to guide viewer gaze about a scene. Modulations are terminated before the viewer can scrutinize them with high acuity foveal vision. This approach is preferred to overt techniques that require permanent alterations to images to highlight areas of interest. In our study, participants were asked to recall the location of objects or regions in images. We investigated if using SGD to guide attention to these regions would improve recall. Results showed that the influence of SGD significantly improved accuracy of target count and spatial location recall. This has implications for a wide range of applications including spatial learning in virtual environments as well as image search applications, virtual training and perceptually based rendering.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133894318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
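The key mechanism is terminating a peripheral modulation before it reaches foveal vision. Below is a toy sketch of one way to express that loop; the pixel cancel radius is an assumption, and real SGD implementations key off detected saccades toward the modulated region rather than a simple distance test.

```python
import math

def sgd_step(gaze_xy, target_xy, modulation_active, cancel_radius_px=80):
    """One update of a Subtle Gaze Direction loop: keep modulating a
    peripheral target region, but terminate the modulation as soon as
    the gaze point moves within `cancel_radius_px` of it, before the
    viewer can scrutinize it with foveal vision."""
    dist = math.dist(gaze_xy, target_xy)
    if modulation_active and dist < cancel_radius_px:
        return False  # stop modulating; gaze is headed for the target
    return modulation_active

active = True
for gaze in [(500, 400), (420, 380), (310, 300)]:  # gaze approaching target
    active = sgd_step(gaze, target_xy=(300, 300), modulation_active=active)
    print(active)  # True, True, False
```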
A GPU-accelerated software eye tracking system
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168612
J. Mulligan
{"title":"A GPU-accelerated software eye tracking system","authors":"J. Mulligan","doi":"10.1145/2168556.2168612","DOIUrl":"https://doi.org/10.1145/2168556.2168612","url":null,"abstract":"Current microcomputers are powerful enough to implement a realtime eye tracking system, but the computational throughput still limits the types of algorithms that can be implemented in real time. Many of the image processing algorithms that are typically used in eye tracking applications can be significantly accelerated when the processing is delegated to a graphics processing unit (GPU). This paper describes a real-time gaze tracking system developed using the CUDA programming environment distributed by nVidia. The current implementation of the system is capable of processing a 640 by 480 image in less than 4 milliseconds, and achieves an average accuracy close to 0.5 degrees of visual angle.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130367159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
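The abstract does not detail the pipeline, but thresholding followed by a centroid reduction is a typical per-pixel stage of the kind delegated to a GPU in such systems (under 4 ms per 640x480 frame implies over 250 fps of headroom). The numpy sketch below only illustrates the computation on the CPU; it is not the paper's CUDA implementation.

```python
import numpy as np

def pupil_centroid(gray, threshold=40):
    """Estimate the pupil center as the centroid of dark pixels.
    This per-pixel threshold + reduction is the kind of stage a CUDA
    kernel would parallelize; numpy stands in for the GPU here."""
    mask = gray < threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic 640x480 eye image: bright background, dark pupil disc.
img = np.full((480, 640), 200, dtype=np.uint8)
yy, xx = np.ogrid[:480, :640]
img[(xx - 320)**2 + (yy - 240)**2 < 30**2] = 10
print(pupil_centroid(img))  # approximately (320.0, 240.0)
```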
Voice activity detection from gaze in video mediated communication
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168628
Michal Hradiš, Shahram Eivazi, R. Bednarik
{"title":"Voice activity detection from gaze in video mediated communication","authors":"Michal Hradiš, Shahram Eivazi, R. Bednarik","doi":"10.1145/2168556.2168628","DOIUrl":"https://doi.org/10.1145/2168556.2168628","url":null,"abstract":"This paper discusses estimation of active speaker in multi-party video-mediated communication from gaze data of one of the participants. In the explored settings, we predict voice activity of participants in one room based on gaze recordings of a single participant in another room. The two rooms were connected by high definition, low delay audio and video links and the participants engaged in different activities ranging from casual discussion to simple problem-solving games. We treat the task as a classification problem. We evaluate several types of features and parameter settings in the context of Support Vector Machine classification framework. The results show that using the proposed approach vocal activity of a speaker can be correctly predicted in 89 % of the time for which the gaze data are available.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123212119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
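The classification framing can be illustrated with scikit-learn's SVC on synthetic data: hypothetical per-window features describe how the observer's gaze is distributed over the remote participants, and the label is the active speaker. The feature set and parameters are invented for illustration and differ from the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical features per time window: fraction of gaze time spent
# on each of three remote participants' faces. Label: index of the
# active speaker. Synthetic data encodes the tendency to look at
# whoever is speaking.
rng = np.random.default_rng(0)
n = 600
labels = rng.integers(0, 3, size=n)
features = rng.random((n, 3)) * 0.3
features[np.arange(n), labels] += 0.7   # gaze skews toward the speaker

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```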
Learning eye movement patterns for characterization of perceptual expertise
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2012-03-28 DOI: 10.1145/2168556.2168645
Rui Li, J. Pelz, P. Shi, Cecilia Ovesdotter Alm, Anne R. Haake
{"title":"Learning eye movement patterns for characterization of perceptual expertise","authors":"Rui Li, J. Pelz, P. Shi, Cecilia Ovesdotter Alm, Anne R. Haake","doi":"10.1145/2168556.2168645","DOIUrl":"https://doi.org/10.1145/2168556.2168645","url":null,"abstract":"Human perceptual expertise has significant influence on medical image inspection. However, little is known regarding whether experts differ in their cognitive processing or what effective visual strategies they employ for examining medical images. To remedy this, we conduct an eye tracking experiment and collect both eye movement and verbal description data from three groups of subjects with different medical training levels. Each subject examines and describes 42 photographic dermatological images. We then develop a hierarchical probabilistic framework to extract the common and unique eye movement patterns exhibited among multiple subjects' fixation and saccadic eye movements within each expertise-specific group. Furthermore, experts' annotations of thought units on the transcribed verbal descriptions are time-aligned with these eye movement patterns to identify their semantic meanings. In this work, we are able to uncover the manner in which these subjects alternated their viewing strategies over the course of inspection, and additionally extract their perceptual expertise so that it can be used for advanced medical image understanding.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125416891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31