Proceedings of the 2006 symposium on Eye tracking research & applications: latest publications

Effect of letter spacing on eye movements and reading performance
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117337
Yu-Chi Tai, J. Sheedy, John R. Hayes
Abstract: Previous studies have shown that, when words are presented in a rapid stream (i.e., the RSVP paradigm), word recognition speed for strings of three-letter words increases by approximately 10% with large letter spacing, both in the fovea and in the periphery (up to 10° eccentricity). A possible explanation is that small spacing causes features of individual characters to overlap with one another, reducing text legibility, impeding letter and word recognition, and slowing down the reading process. Conversely, increasing letter spacing reduces the crowding effect, until the spacing is so wide that word-shape information is disrupted or the word extends beyond the visual span, which again slows down reading.
Citations: 4
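The RSVP presentation and spacing manipulation described above are easy to make concrete. Below is a minimal Python sketch, not the authors' experimental code: `letter_spaced` and `rsvp_schedule` are hypothetical helpers that insert blanks between letters and fix each word's exposure duration for a nominal reading rate; the reported ~10% effect would surface as a change in the exposure needed to reach criterion recognition accuracy.

```python
# Minimal sketch of an RSVP presentation schedule (illustrative only).
# Each word is shown alone at fixation for a fixed exposure, so reading
# speed is controlled by exposure duration rather than eye movements.

def letter_spaced(word: str, extra_spaces: int) -> str:
    """Return the word with `extra_spaces` blanks between adjacent letters."""
    return (" " * extra_spaces).join(word)

def rsvp_schedule(words, words_per_minute: float):
    """Return (word, exposure_ms) pairs for a fixed-rate RSVP stream."""
    exposure_ms = 60_000.0 / words_per_minute
    return [(w, exposure_ms) for w in words]

if __name__ == "__main__":
    stream = ["the", "cat", "sat"]
    for word, ms in rsvp_schedule([letter_spaced(w, 2) for w in stream], 300):
        print(f"{word!r} shown for {ms:.0f} ms")
```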
Using the eyes to encode and recognize social scenes
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117320
Elina Birmingham, W. Bischof, A. Kingstone
Abstract: In a previous study, we found that observers look mostly at the eyes when viewing natural scenes containing one or more people (Birmingham et al., submitted). This prioritization of eye regions occurred regardless of the type of scene being viewed (e.g., scenes with one person vs. scenes with several people; see Figure 1). The finding that observers attend preferentially to the eyes when freely viewing scenes suggests that the eyes are the most informative regions of the scene. As a consequence, one might also expect observers to encode and/or recognize scenes through information from the eyes. This prediction is in line with the finding that, when viewing object scenes in preparation for a later memory test, observers tend to fixate more informative objects more frequently than less informative objects (Henderson et al. 1999).
Citations: 0
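The "observers look mostly at the eyes" result rests on assigning fixations to scene regions. The sketch below illustrates that kind of region-of-interest tally; the region names, coordinates, and fixations are invented for illustration and are not the study's data. More specific regions are listed first so they win over the enclosing "body" box.

```python
# Sketch of a region-of-interest fixation analysis (hypothetical regions).
from collections import Counter

# region name -> (x_min, y_min, x_max, y_max) in screen pixels.
# Order matters: the first matching region wins, so specific regions
# (eyes, mouth) precede the enclosing body box.
REGIONS = {
    "eyes":  (300, 100, 380, 130),
    "mouth": (310, 160, 370, 185),
    "body":  (250,  60, 430, 400),
}

def classify(fix, regions):
    """Return the name of the first region containing the fixation."""
    x, y = fix
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "background"

def fixation_shares(fixations, regions=REGIONS):
    """Proportion of fixations falling in each region."""
    counts = Counter(classify(f, regions) for f in fixations)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

if __name__ == "__main__":
    fixations = [(320, 110), (350, 120), (330, 170), (100, 300)]
    print(fixation_shares(fixations))  # the eyes region dominates here
```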
Gaze alignment of interlocutors in conversational dialogues
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117322
K. Hadelich, M. Crocker
Abstract: In psycholinguistics, eye tracking has been a successful and valuable tool for investigating the online processes of language comprehension and language production (e.g., [Griffin and Bock 2000], [Tanenhaus et al. 1995]). However, its application to the mechanisms underlying more naturalistic language use, e.g. dialogue, has so far been limited to examining the eye movements of either the speaker or the listener in isolation (e.g., [Brown-Schmidt et al. 2005]; [Richardson and Dale 2004]). Even offline dialogue experiments investigating, e.g., priming effects usually involve only one "real" participant, while their interlocutor is a confederate of the experimenter. To test predictions from dialogue models (e.g., [Pickering and Garrod 2004]), and to provide the kinds of evidence necessary for their further development, experimental methods that directly examine the behaviour of participants actually engaged in a conversation are needed. Additionally, eye-tracking measures established in psycholinguistic monologue research need to be compared with their dialogue-processing counterparts. Furthermore, new measures describing the relation between speaker and listener eye movements in communication are needed, as they can shed light on the language mechanisms underlying conversational interaction.
Citations: 22
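One concrete candidate for the "new measures" the abstract calls for is time-lagged gaze overlap between speaker and listener, in the spirit of the cross-recurrence analysis of Richardson and Dale [2004]. A minimal sketch, assuming gaze has been coded offline as a fixed-rate sequence of looked-at object labels:

```python
# Sketch of a lagged speaker-listener gaze overlap measure (illustrative).
# overlap(lag) is the proportion of samples at which the listener looks
# at the object the speaker looked at `lag` samples earlier.

def lagged_overlap(speaker, listener, lag):
    """Proportion of matching samples when the listener is shifted by `lag`."""
    if lag >= len(speaker):
        return 0.0
    pairs = list(zip(speaker[:len(speaker) - lag], listener[lag:]))
    return sum(s == l for s, l in pairs) / len(pairs)

if __name__ == "__main__":
    speaker  = ["cup", "cup", "ball", "ball", "ball", "cup"]
    listener = ["ball", "cup", "cup", "ball", "ball", "ball"]
    for lag in range(3):
        print(f"lag {lag}: overlap = {lagged_overlap(speaker, listener, lag):.2f}")
    # Here the listener tracks the speaker with a one-sample delay,
    # so overlap peaks at lag 1.
```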
Eye movements and motor programming in a Time-To-Contact task
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117339
E. Morya, M. Bertolassi, A. P. Filho, C. Morimoto, R. Ranvaud
Abstract: In previous experiments investigating motor control in a Time-To-Contact task [Morya et al. 2003], events occurring 400-600 ms prior to contact (but not earlier or later) caused volunteers to anticipate their estimate of when contact occurred. Many such mislocalization or mistiming effects have been discussed in the literature [Nijhawan 1994; van Beers et al. 2001]. In preliminary eye-tracking experiments [Morya et al. 2004] with a simplified version of the task, involuntary shifts in gaze suggested the presence of attentional shifts as volunteers prepared to respond, which might be associated with their anticipations. To better understand the factors involved in these observations, gaze was systematically recorded while changing the speed of the moving target and with different instructions as to where the volunteers should look as they performed the Time-To-Contact task.
Citations: 0
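The arithmetic of a Time-To-Contact task is simple enough to state directly. A minimal sketch (illustrative only, not the authors' paradigm code): nominal TTC is distance over speed, and anticipation is how far the response precedes true contact, which is where the 400-600 ms window discussed above would show up.

```python
# Sketch of Time-To-Contact (TTC) arithmetic for a constant-speed target.

def time_to_contact(distance: float, speed: float) -> float:
    """Nominal time (s) until the target reaches the contact point."""
    if speed <= 0:
        raise ValueError("target must be moving toward the contact point")
    return distance / speed

def anticipation_error(response_time: float, contact_time: float) -> float:
    """Seconds by which the response precedes true contact (positive = early)."""
    return contact_time - response_time

if __name__ == "__main__":
    contact = time_to_contact(distance=0.30, speed=0.10)  # 3.0 s
    print(f"true contact at {contact:.2f} s")
    print(f"anticipation: {anticipation_error(2.85, contact) * 1000:.0f} ms early")
```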
A comparative usability study of two Japanese gaze typing systems
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117344
K. Itoh, Hirotaka Aoki, J. P. Hansen
Abstract: The complex interplay between gaze-tracker accuracy and interface design is the focus of this paper. Two slightly different variants of GazeTalk, a hierarchical typing interface, were contrasted with a novel interface, Dasher, in which text entry is done by continuous navigation. All of the interfaces were tested with a good and a deliberately bad calibration of the tracker. The purpose was to investigate whether performance indices normally used for the evaluation of typing systems, such as characters per minute (CPM) and error rate, could differentiate between the conditions, and thus guide an iterative system development of both trackers and interfaces. Gaze typing with one version of the static, hierarchical menu system was slightly faster than with the others. Error measures, in terms of the rate of backspacing, were also significantly different across the systems, while the deliberately bad tracker calibrations did not have any measurable effect. Learning effects were evident under all conditions. Power-law-of-practice learning models suggested that Dasher might be more efficient than GazeTalk in the long run.
Citations: 25
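The indices named above, characters per minute, backspace rate, and a power-law-of-practice fit T(n) = a * n^(-b), can be sketched in a few lines. The data and helper names below are hypothetical; the exponent is estimated by ordinary least squares in log-log space.

```python
# Sketch of gaze-typing performance indices and a power-law fit.
import math

def cpm(chars_typed: int, seconds: float) -> float:
    """Characters per minute."""
    return 60.0 * chars_typed / seconds

def backspace_rate(keystrokes: list[str]) -> float:
    """Share of keystrokes that are corrections."""
    return sum(k == "BACKSPACE" for k in keystrokes) / len(keystrokes)

def fit_power_law(times):
    """Fit T(n) = a * n**(-b) to per-session times; return (a, b)."""
    xs = [math.log(n + 1) for n in range(len(times))]   # sessions 1..N
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

if __name__ == "__main__":
    session_times = [120.0, 95.0, 82.0, 74.0, 69.0]     # s per phrase
    a, b = fit_power_law(session_times)
    print(f"T(n) = {a:.1f} * n^(-{b:.2f})")
    print(f"CPM: {cpm(chars_typed=150, seconds=300):.1f}")
    print(f"backspace rate: {backspace_rate(['a', 'BACKSPACE', 'b', 'c']):.2f}")
```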
openEyes: a low-cost head-mounted eye-tracking solution
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117350
Dongheng Li, J. Babcock, Derrick J. Parkhurst
Abstract: Eye tracking has long held the promise of being a useful methodology for human computer interaction. However, a number of barriers have stood in the way of the integration of eye tracking into everyday applications, including the intrusiveness, robustness, availability, and price of eye-tracking systems. To lower these barriers, we have developed the openEyes system. The system consists of an open-hardware design for a digital eye tracker that can be built from low-cost off-the-shelf components, and a set of open-source software tools for digital image capture, manipulation, and analysis in eye-tracking applications. We expect that the availability of this system will facilitate the development of eye-tracking applications and the eventual integration of eye tracking into the next generation of everyday human computer interfaces. We discuss the methods and technical challenges of low-cost eye tracking as well as the design decisions that produced our current system.
Citations: 269
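The openEyes software itself implements considerably more robust feature-based algorithms; purely as an illustration of the dark-pupil idea common to low-cost infrared trackers, here is a bare-bones centroid estimate over a thresholded grayscale image (the threshold and the synthetic image are assumptions):

```python
# Bare-bones dark-pupil centroid estimate (illustrative, not openEyes code).
import numpy as np

def dark_pupil_centroid(eye_img: np.ndarray, threshold: int = 40):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no candidate pupil pixels are found."""
    mask = eye_img < threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

if __name__ == "__main__":
    # Synthetic 100x100 bright image with a dark 10x10 "pupil".
    img = np.full((100, 100), 200, dtype=np.uint8)
    img[45:55, 60:70] = 10
    print(dark_pupil_centroid(img))  # approx (49.5, 64.5)
```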
Causal saliency effects during natural vision
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117313
R. Carmi, L. Itti
Abstract: Salient stimuli, such as color or motion contrasts, attract human attention, thus providing a fast heuristic for focusing limited neural resources on behaviorally relevant sensory inputs. Here we address the following questions: what types of saliency attract attention, and how do they compare to each other during natural vision? We asked human participants to inspect scene-shuffled video clips, tracked their instantaneous eye position, and quantified how well a battery of computational saliency models predicted overt attentional selections (saccades). Saliency effects were measured as a function of total viewing time, proximity to abrupt scene transitions (jump cuts), and inter-participant consistency. All saliency models predicted overall attentional selection well above chance, with the dynamic models roughly matching one another in predictive power and being up to 3.6 times more predictive than static models. The prediction accuracy of all dynamic models was twice as high as their own average for saccades that were initiated immediately after jump cuts, which also led to maximal inter-participant consistency. Static models showed mixed results in these circumstances, with some models having weaker prediction accuracy than their average. These results demonstrate that dynamic visual cues play a dominant causal role in attracting attention, while static visual cues correlate with attentional selection mostly due to top-down causes.
Citations: 27
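"Well above chance" prediction of saccade targets is commonly scored with metrics like the Normalized Scanpath Saliency, the z-scored saliency value sampled at each saccade endpoint; the paper does not necessarily use this exact metric, so the sketch below is illustrative. Scores near 0 indicate chance-level prediction, and higher is better.

```python
# Sketch of a Normalized Scanpath Saliency (NSS) style score.
import numpy as np

def normalized_scanpath_saliency(sal_map: np.ndarray, endpoints) -> float:
    """Mean z-scored saliency at the given (row, col) saccade endpoints."""
    z = (sal_map - sal_map.mean()) / sal_map.std()
    return float(np.mean([z[r, c] for r, c in endpoints]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rng.random((60, 80))
    sal[20:30, 40:50] += 2.0           # a salient patch
    on_patch  = [(25, 45), (22, 48)]   # saccades landing on the patch
    off_patch = [(5, 5), (50, 70)]     # saccades landing elsewhere
    print(normalized_scanpath_saliency(sal, on_patch))   # well above 0
    print(normalized_scanpath_saliency(sal, off_patch))  # near 0
```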
One-point calibration gaze tracking method
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117318
Takehiko Ohno
Abstract: A novel gaze tracking method that requires only one calibration marker for personal calibration is proposed. Personal calibration is generally known to be a troublesome task, requiring the user to look at nine to twenty calibration markers in succession. Unlike traditional methods, the proposed method, called the One-Point Calibration method (OPC method), drastically reduces the cost of personal calibration by requiring only a single calibration marker. While the user looks at the calibration marker, the difference between the user's eyeball shape and the eyeball model used to calculate the user's gaze direction is estimated, and the residual error is compensated using parameters derived from the calibration.
Citations: 33
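In its simplest form, the compensation step described above reduces to measuring a residual offset at the single marker and applying it to all subsequent estimates. The actual OPC method estimates eyeball-model parameters, so the following is only a minimal sketch of the one-point idea:

```python
# Minimal sketch of one-point offset compensation (not the full OPC method).

def calibrate_one_point(marker_xy, estimated_gaze_xy):
    """Return the (dx, dy) residual offset measured at the calibration marker."""
    mx, my = marker_xy
    gx, gy = estimated_gaze_xy
    return mx - gx, my - gy

def apply_correction(raw_gaze_xy, offset):
    """Shift a raw gaze estimate by the stored calibration offset."""
    x, y = raw_gaze_xy
    dx, dy = offset
    return x + dx, y + dy

if __name__ == "__main__":
    # User fixates a marker at screen center; the model is off by (15, -8) px.
    offset = calibrate_one_point((512, 384), (497, 392))
    print(apply_correction((300, 200), offset))  # (315, 192)
```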
A pivotable head mounted camera system that is aligned by three-dimensional eye movements
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117354
P. Wagner, K. Bartl, W. Günthner, E. Schneider, T. Brandt, H. Ulbrich
Abstract: The first proof of concept of an eye-movement-driven head camera system was recently presented. This innovative device used voluntary and reflexive eye movements, registered by video-oculography and computed online, as signals to drive servo motors that aligned the camera along the user's gaze direction. However, with just two degrees of freedom, this camera motion device could not compensate for roll motions around the optical axis of the system. Therefore, a new three-degree-of-freedom camera motion device that is able to reproduce the whole range of possible eye movements has now been implemented. It allows a freely mobile user to aim the optical axis of the head-mounted camera at the target(s) in the visual field at which he/she is looking, while the ocular reflexes minimize image shaking by naturally counter-rolling the "gaze in space" of the camera during head and visual-scene movements as well as during locomotion. A camera guided in this way mimics the natural exploration of a visual scene and acquires video sequences from the perspective of a mobile user, while the oculomotor reflexes naturally stabilize the camera on target during head and target movements. Various documentation and teaching applications in health care, industry, and research are conceivable. This work presents the implementation of the new camera motion device and its integration into a head camera setup including the eye tracking device.
Citations: 27
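A device like this needs a mapping from three-dimensional eye position to three motor setpoints. The sketch below assumes hypothetical geometry, gains, and limits (the paper does not give the kinematics): horizontal and vertical eye angles drive pan and tilt, and ocular torsion drives the roll axis that the earlier two-degree-of-freedom device lacked.

```python
# Sketch of a 3-DOF eye-to-camera control mapping (hypothetical gains/limits).
from dataclasses import dataclass

@dataclass
class ServoSetpoints:
    pan_deg: float
    tilt_deg: float
    roll_deg: float

def eye_to_servo(h_deg: float, v_deg: float, t_deg: float,
                 gain: float = 1.0, limits=(-30.0, 30.0)) -> ServoSetpoints:
    """Map eye-in-head angles (horizontal, vertical, torsional, in degrees)
    to clamped camera-motor setpoints."""
    lo, hi = limits

    def clamp(angle: float) -> float:
        return max(lo, min(hi, gain * angle))

    return ServoSetpoints(clamp(h_deg), clamp(v_deg), clamp(t_deg))

if __name__ == "__main__":
    # 10 deg rightward, 5 deg down, 2 deg torsion: camera pans, tilts, rolls.
    print(eye_to_servo(10.0, -5.0, 2.0))
```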
Mobile eye tracking as a basis for real-time control of a gaze driven head-mounted video camera
Proceedings of the 2006 symposium on Eye tracking research & applications. Pub Date: 2006-03-27. DOI: 10.1145/1117309.1117341
G. Böning, K. Bartl, T. Dera, S. Bardins, E. Schneider, T. Brandt
Abstract: Eye trackers based on video-oculographic (VOG) methods are a convenient means for oculomotor research. This work focused on the development of a VOG device that allows mobile eye tracking. It was especially designed to support a head-mounted gaze-driven camera system presented in a companion paper [Wagner et al. 2006] (see Figure 1). The target applications of such a device can be seen in surgery, the medical and behavioral sciences, and the documentation and teaching of manual tasks. One major aim was the design of a lightweight, low-cost head mount. Since the actuators of the gaze camera require close-to-real-time control, the software was implemented on standard PC hardware using well-known VOG algorithms that were optimized for short latencies.
Citations: 25
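Close-to-real-time control means the per-frame VOG processing must fit a fixed latency budget. The loop structure below is a sketch only: frame capture and pupil detection are stubbed, and the 20 ms deadline is an assumption, but it shows how per-frame timing can be checked against the actuator's control deadline during development.

```python
# Sketch of a latency-checked VOG tracking loop (stubs, assumed deadline).
import time

LATENCY_BUDGET_S = 0.020            # assumed 20 ms deadline per frame

def capture_frame():
    return object()                 # stub for the camera driver

def detect_pupil(frame):
    return (240.0, 320.0)           # stub for the VOG algorithm

def tracking_loop(n_frames: int = 5):
    for _ in range(n_frames):
        t0 = time.perf_counter()
        gaze = detect_pupil(capture_frame())
        elapsed = time.perf_counter() - t0
        status = "OK" if elapsed <= LATENCY_BUDGET_S else "OVERRUN"
        print(f"gaze={gaze} processed in {elapsed * 1000:.2f} ms [{status}]")

if __name__ == "__main__":
    tracking_loop()
```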