Proceedings of the 2006 symposium on Eye tracking research & applications: Latest Publications

Gaze-contingent temporal filtering of video
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117353
Martin Böhme, M. Dorr, T. Martinetz, E. Barth
{"title":"Gaze-contingent temporal filtering of video","authors":"Martin Böhme, M. Dorr, T. Martinetz, E. Barth","doi":"10.1145/1117309.1117353","DOIUrl":"https://doi.org/10.1145/1117309.1117353","url":null,"abstract":"We describe an algorithm for manipulating the temporal resolution of a video in real time, contingent upon the viewer's direction of gaze. The purpose of this work is to study the effect that a controlled manipulation of the temporal frequency content in real-world scenes has on eye movements. We build on the work of Perry and Geisler [1998; 2002], who manipulate spatial resolution as a function of gaze direction, allowing them to mimic the resolution distribution of the human retina or to simulate the effect of various diseases (e.g. glaucoma).Our temporal filtering algorithm is similar to that of Perry and Geisler in that we interpolate between the levels of a multiresolution pyramid. However, in our case, the pyramid is built along the temporal dimension, and this requires careful management of the buffering of video frames and of the order in which the filtering operations are performed. 
On a standard personal computer, the algorithm achieves real-time performance (30 frames per second) on high-resolution videos (960 by 540 pixels).We present experimental results showing that the manipulation performed by the algorithm reduces the number of high-amplitude saccades and can remain unnoticed by the observer.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123715628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
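The abstract does not spell out the blending step. As a rough illustration of the general idea only (not the authors' algorithm, which manages frame buffering and filter ordering far more carefully), a per-pixel interpolation between levels of a simple temporal averaging pyramid might look like the NumPy sketch below; the box averaging, the `sigma` falloff, and the frame-buffer layout are all assumptions for illustration.

```python
import numpy as np

def temporal_pyramid_blend(frames, gaze_xy, sigma=120.0):
    """Blend temporal pyramid levels per pixel by distance from gaze.

    frames: buffer of recent grayscale frames, shape (T, H, W), T a power of 2.
    Level 0 is the newest frame (full temporal resolution); deeper levels
    average over progressively longer time windows (lower temporal resolution).
    """
    T, H, W = frames.shape
    # Temporal pyramid: level k averages the newest 2**k frames.
    n_levels = int(np.log2(T)) + 1
    levels = [frames[-2**k:].mean(axis=0) for k in range(n_levels)]

    # Desired level per pixel: 0 at the gaze point, deeper in the periphery.
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    level = np.clip(dist / sigma, 0, n_levels - 1)

    # Linearly interpolate between the two neighbouring pyramid levels.
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, n_levels - 1)
    frac = level - lo
    stack = np.stack(levels)                      # (n_levels, H, W)
    out = (1 - frac) * np.take_along_axis(stack, lo[None], 0)[0] \
          + frac * np.take_along_axis(stack, hi[None], 0)[0]
    return out
```

At the gaze point the output is exactly the newest frame; far in the periphery it approaches a long temporal average, i.e. reduced temporal resolution.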
Empathic tutoring software agents using real-time eye tracking
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117346
Hua Wang, M. Chignell, M. Ishizuka
{"title":"Empathic tutoring software agents using real-time eye tracking","authors":"Hua Wang, M. Chignell, M. Ishizuka","doi":"10.1145/1117309.1117346","DOIUrl":"https://doi.org/10.1145/1117309.1117346","url":null,"abstract":"This paper describes an empathic software agent (ESA) interface using eye movement information to facilitate empathy-relevant reasoning and behavior. Eye movement tracking is used to monitor user's attention and interests, and to personalize the agent behaviors. The system reacts to user eye information in real-time, recording eye gaze and pupil dilation data during the learning process. Based on these measures, the ESA infers the focus of attention and motivational status of the learner and responds accordingly with affective (display of emotion) and instructional behaviors. In addition to describing the design and implementation of empathic software agents, this paper will report on some preliminary usability test results concerning how users respond to the empathic functions that are provided.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121915423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
Computational mechanisms for gaze direction in interactive visual environments
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117315
Robert J. Peters, L. Itti
{"title":"Computational mechanisms for gaze direction in interactive visual environments","authors":"Robert J. Peters, L. Itti","doi":"10.1145/1117309.1117315","DOIUrl":"https://doi.org/10.1145/1117309.1117315","url":null,"abstract":"Next-generation immersive virtual environments and video games will require virtual agents with human-like visual attention and gaze behaviors. A critical step is to devise efficient visual processing heuristics to select locations that would attract human gaze in complex dynamic environments. One promising approach to designing such heuristics draws on ideas from computational neuroscience. We compared several such heuristics with eye movement recordings from five observers playing video games, and found that heuristics which detect outliers from the global distribution of visual features were better predictors of human gaze than were purely local heuristics. Heuristics sensitive to dynamic events performed best overall. Further, heuristic prediction power differed more between games than between different human observers. Our findings suggest simple neurally-inspired algorithmic methods to predict where humans look while playing video games.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123246053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
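The "global outlier" idea in this abstract can be caricatured in a few lines: score each location by how atypical its feature value is relative to the whole frame, rather than relative to a local neighbourhood. The z-score below is purely an illustrative assumption; the paper's actual heuristics operate on multi-scale feature maps in the Itti-Koch tradition.

```python
import numpy as np

def global_outlier_salience(feature_map):
    """Score each location by its deviation from the global distribution
    of feature values over the whole frame (a simple z-score), rather
    than from a local neighbourhood only."""
    mu, sigma = feature_map.mean(), feature_map.std()
    return np.abs(feature_map - mu) / (sigma + 1e-9)

def predicted_gaze(feature_map):
    """Return the (x, y) location with the largest global-outlier score."""
    s = global_outlier_salience(feature_map)
    y, x = np.unravel_index(np.argmax(s), s.shape)
    return x, y
```

A location that is unremarkable locally but rare frame-wide (e.g. the only bright patch in a dark scene) scores highly under this scheme, which is the distinction the study found predictive.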
Communication through eye-gaze: where we have been, where we are now and where we can go from here
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117311
H. Istance
{"title":"Communication through eye-gaze: where we have been, where we are now and where we can go from here","authors":"H. Istance","doi":"10.1145/1117309.1117311","DOIUrl":"https://doi.org/10.1145/1117309.1117311","url":null,"abstract":"Throughout the history of gaze tracking, there have been several dimensions along which the evolution of gaze-based communication can be viewed. Perhaps the most important of these dimensions is the ease of use, or usability, of systems incorporating eye tracking. Usable communication through eye-gaze has been a goal for many years and offers the prospect of effortless and fast communication for able bodied and disabled users alike. To date such communication has been hampered by a number of problems limiting its widespread uptake. Systems have evolved over time and can provide effective means of communication within restricted bounds, but these are typically incompatible and limited to a few application areas, and each has suffered from particular usability problems. As a consequence uptake remains low and the cost of individual eyetracking systems remains high. However, more is being understood and published about the usability requirements for eye-gaze communication systems, particularly for users with different types of disability. With the advance of research and technology we are now seeing genuinely usable systems which can be used for a broad range of applications and, with this, the prospect of much wider acceptance of gaze as a means of communication.A second dimension is how we can utilise our communication through eye gaze. Much work has been undertaken addressing the nature of control and the concepts of active and passive control, or command-based and non-command based interaction. 
Active control and the giving of commands by eye to on-screen keyboards and other control interfaces is now well known and has lead to greatly improved usability via compensation for the limitations of eyetrackers as a data source, and by providing predictive and corrective capabilities to the user interface. Passive monitoring of gaze position leads to the notion of gaze-aware objects which are capable of responding to user attention in a way appropriate to the specific task context. Early work by Starker and Bolt [1990], for example, assigned objects in a virtual world gaze based indices of interest where control was mediated by system evaluation of user interest without the need for active user control. By employing these concepts current gaze control systems have now achieved acceptable ease of use by making on-screen objects gaze-aware, allowing compensation for tracking and manipulation inaccuracies. Gaze aware interaction is now migrating from the confines of the desktop to the user task space in the real world within a ubiquitous computing context. Instead of attempting to track gaze position in world space relative to the user, with the many difficulties this presents in inaccuracies and encumbering equipment, gaze tracking can be moved to many ubiquitous objects in the real world. 
Visual contact and manipulation with gaze aware instrumented objects is now possible by equipping objects with eye-contact sensors detecting infra-red corneal reflecti","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127394541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Speech-augmented eye gaze interaction with small closely spaced targets
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117345
D. Miniotas, O. Špakov, I. Tugoy, I. MacKenzie
{"title":"Speech-augmented eye gaze interaction with small closely spaced targets","authors":"D. Miniotas, O. Špakov, I. Tugoy, I. MacKenzie","doi":"10.1145/1117309.1117345","DOIUrl":"https://doi.org/10.1145/1117309.1117345","url":null,"abstract":"Eye trackers have been used as pointing devices for a number of years. Due to inherent limitations in the accuracy of eye gaze, however, interaction is limited to objects spanning at least one degree of visual angle. Consequently, targets in gaze-based interfaces have sizes and layouts quite distant from \"natural settings\". To accommodate accuracy constraints, we developed a multimodal pointing technique combining eye gaze and speech inputs. The technique was tested in a user study on pointing at multiple targets. Results suggest that in terms of a footprint-accuracy tradeoff, pointing performance is best (~93%) for targets subtending 0.85 degrees with 0.3-degree gaps between them. User performance is thus shown to approach the limit of practical pointing. Effectively, developing a user interface that supports hands-free interaction and has a design similar to today's common interfaces is feasible.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125123994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 71
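The division of labour between the two modalities can be sketched as: gaze narrows the candidate set to targets inside its error radius, and speech disambiguates among them. The label-matching scheme, radius parameter, and data layout below are hypothetical, chosen only to illustrate that division, not taken from the paper.

```python
import math

def resolve_target(gaze_xy, targets, spoken_label, radius_deg):
    """Gaze proposes: keep targets within the gaze error radius.
    Speech disposes: among those, pick the one whose label was spoken.

    targets: list of (label, x, y) in degrees of visual angle.
    Returns the chosen (label, x, y), or None if nothing matches.
    """
    gx, gy = gaze_xy
    near = [t for t in targets
            if math.hypot(t[1] - gx, t[2] - gy) <= radius_deg]
    for t in near:
        if t[0] == spoken_label:
            return t
    return None
```

Because speech only has to distinguish the few targets near the gaze point, targets can be far smaller and more densely packed than raw gaze accuracy alone would allow.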
Visual attention in 3D video games
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117327
Su Yan, M. S. El-Nasr
{"title":"Visual attention in 3D video games","authors":"Su Yan, M. S. El-Nasr","doi":"10.1145/1117309.1117327","DOIUrl":"https://doi.org/10.1145/1117309.1117327","url":null,"abstract":"Visual attention has long been an important topic in psychology and cognitive science. Recently, results from visual attention research (Haber et al. 2001; Myszkowski et al. 2001) are being adopted by computer graphics research. Due to speed limitations, there has been a movement to use a perception-based rendering approach where the rendering process itself takes into account where the user is most likely looking (Haber et al. 2001). Examples include trying to achieve real-time global illumination by concentrating the global illumination calculation only in parts of the scene that are salient (Myszkowski 2002). Video games have achieved a high degree of popularity because of such advances in computer graphics. These techniques are also important because they have enabled game environments to be used in applications such as health therapy and training.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129137622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 79
Perspective error compensation of pupillography using glint images
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117330
InBum Lee, K. Park
{"title":"Perspective error compensation of pupillography using glint images","authors":"InBum Lee, K. Park","doi":"10.1145/1117309.1117330","DOIUrl":"https://doi.org/10.1145/1117309.1117330","url":null,"abstract":"This paper suggests a new method to compensate perspective error in measuring real size of pupil of pupillography for pupil light response. To get real size of pupil, the distance between cornea and camera should be calculated. The glints on the eye of the image were used to estimate the distance. The suggested method was validated using telecentric lens.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126560690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
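The similar-triangles geometry behind this compensation can be sketched as follows. The paper models corneal glints much more carefully (the cornea acts as a convex mirror), so treat this pinhole version only as the basic idea, with made-up parameter names.

```python
def distance_from_feature_pair(sep_px, sep_mm, focal_px):
    """Pinhole similar triangles: two features a known physical distance
    sep_mm apart that image sep_px pixels apart lie at depth
    Z = f * S / p (f in pixels)."""
    return focal_px * sep_mm / sep_px

def real_pupil_diameter(pupil_px, distance_mm, focal_px):
    """Back-project the measured pupil image: S = p * Z / f.
    Without a distance estimate, an eye that moves closer to the camera
    would simply look like a dilating pupil."""
    return pupil_px * distance_mm / focal_px
```

For example, with a 1000 px focal length, glints from sources 50 mm apart imaging 100 px apart place the eye at 500 mm; an 80 px pupil image at that depth back-projects to a 4 mm pupil.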
Simulation of effects of horizontal camera slippage on corneal reflections
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117329
T. Haslwanter, C. Kitzmueller, M. Scheubmayr
{"title":"Simulation of effects of horizontal camera slippage on corneal reflections","authors":"T. Haslwanter, C. Kitzmueller, M. Scheubmayr","doi":"10.1145/1117309.1117329","DOIUrl":"https://doi.org/10.1145/1117309.1117329","url":null,"abstract":"While the video-based measurement of eye movements, also referred to as \"video-oculography\" (VOG), has many advantages, it also suffers from a serious disadvantage which has not been solved yet: using images of the eye, how can we distinguish between a movement of the eye-in-the-head on the one hand, and a movement of the camera with respect to the head on the other? To distinguish between the two, we need additional information about the orientation and position of the camera with respect to the head.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"649 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131825555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perceptual attention focus prediction for multiple viewers in case of multimedia perceptual compression with feedback delay
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117352
Oleg V. Komogortsev, J. Khan
{"title":"Perceptual attention focus prediction for multiple viewers in case of multimedia perceptual compression with feedback delay","authors":"Oleg V. Komogortsev, J. Khan","doi":"10.1145/1117309.1117352","DOIUrl":"https://doi.org/10.1145/1117309.1117352","url":null,"abstract":"Human eyes have limited perception capabilities. Only 2 degrees of our 180 degree vision field provide the highest quality of perception. Due to this fact the idea of perceptual attention focus emerged to allow a visual content to be changed in a way that only part of the visual field where a human attention is directed to is encoded with a high quality. The image quality in the periphery can be reduced without a viewer noticing it. This compression approach allows a significant decrease in bit-rate for a video stream, and in the case of the 3D stream rendering, it decreases the computational burden. A number of previous researchers have investigated the topic of real-time perceptual attention focus but only for a single viewer. In this paper we investigate a dynamically changing multi-viewer scenario. In this type of scenario a number of people are watching the same visual content at the same time. Each person is using eye-tracking equipment. The visual content (video, 3D stream) is sent through a network with a large transmission delay. 
The area of the perceptual attention focus is predicted for the viewers to compensate for the delay value and identify the area of the image which requires highest quality coding.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134129935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
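A minimal sketch of the delay-compensation idea: extrapolate each viewer's gaze forward by the transmission delay, then make the high-quality region cover every predicted point. The linear velocity extrapolation and bounding-box region here are assumptions for illustration; the paper's prediction model is more sophisticated.

```python
def predict_gaze(samples, delay):
    """Extrapolate one viewer's gaze forward by `delay` seconds using
    the velocity from the last two (t, x, y) samples."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * delay, y1 + vy * delay

def high_quality_region(viewer_samples, delay, margin):
    """Axis-aligned box (x_min, y_min, x_max, y_max) covering every
    viewer's predicted gaze point, padded by `margin` pixels to absorb
    prediction error."""
    pts = [predict_gaze(s, delay) for s in viewer_samples]
    xs, ys = zip(*pts)
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

With many viewers, the covering region grows with the spread of their predicted gaze points, which is why the multi-viewer case compresses less aggressively than the single-viewer case.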
Performance of the two-stage, dual-mode oculomotor servo system
Pub Date: 2006-03-27 | DOI: 10.1145/1117309.1117314
J. T. Fulton
{"title":"Performance of the two-stage, dual-mode oculomotor servo system","authors":"J. T. Fulton","doi":"10.1145/1117309.1117314","DOIUrl":"https://doi.org/10.1145/1117309.1117314","url":null,"abstract":"Creating an optimum man-machine interface in the visual domain requires detailed knowledge of the human Precision Optical Servo system (POS) with particular focus on the oculomotor servosystem. The physiology of the human oculomotor system is more advanced than that of even his closest animal relatives. To satisfy the human's needs for defensive safety as well as provide the analytical capability supporting his inquisitiveness, two primary modes of information analysis are provided within the POS. Within the analytical mode, humans employ a multidimensional, fundamentally luminance-based, narrow-field-of-view associative correlator for interp and percept extraction. This associative correlation process relies upon the orthogonal phase coherent character of the \"tremor\" associated with the fine motions of the eyes. Simultaneously, a lower resolution, fundamentally luminance-based, spatial-change-detection mechanism is used to maintain awareness of a larger external environment. To support the above modes, a two-stage servomechanism is used. The angular performances of these two stages are quite different. These differences have a profound impact on good interface design.This paper provides a schematic of the complete POS, a more detailed description of the oculomotor servosystem, and the numerics describing its performance parameters. These parameters lead to the minimum recognition interval required for symbolic displays. Color is shown to play an ancillary, though important, role in the capability of the POS of the visual system. A New Chromaticity Diagram is offered that makes it easier to understand the role of color in POS operation and in color perception in general. 
All of the above descriptions are supported by a larger scale schematic of the overall sensory/cognitive system.","PeriodicalId":440675,"journal":{"name":"Proceedings of the 2006 symposium on Eye tracking research & applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130290194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0