Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications: Latest Publications

GazeButton
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3318154
S. Rivu, Yasmeen Abdrabou, Thomas Mayer, Ken Pfeuffer, Florian Alt
Citations: 3
Image, brand and price info: do they always matter the same?
Pub Date: 2019-06-25 · DOI: 10.1145/3317960.3321616
Mónica Cortiñas, Raquel Chocarro, A. Villanueva
Abstract: We study attention processes for brand, price, and visual information about products on online retail websites, simultaneously considering the effects of consumers' goals, the purchase category, and consumers' statements. We use an intra-subject experimental design, simulated web stores, and a combination of observational eye-tracking data and declarative measures. Image information about the product is the most important stimulus, regardless of the task at hand or the store involved. The roles of brand and price information depend on the product category and the purchase task involved. Declarative measures of relative brand importance are found to be positively related to its observed importance.
Citations: 5
Task-embedded online eye-tracker calibration for improving robustness to head motion
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3319845
Jimin Pi, Bertram E. Shi
Abstract: Remote eye trackers are widely used for screen-based interaction. They are less intrusive than head-mounted eye trackers, but are generally quite sensitive to head movement, which leads to a need for frequent recalibration, especially in applications requiring accurate eye tracking. We propose an online calibration method to compensate for head movements when estimates of the gaze targets are available. For example, in dwell-time-based gaze typing it is reasonable to assume that, for correct selections, the user's gaze target during the dwell time was the key center. We use this assumption to derive an eye-position-dependent linear transformation matrix for correcting the measured gaze. Our experiments show that the proposed method significantly reduces errors over a large range of head movements.
Citations: 9
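The paper derives an eye-position-dependent linear transformation from dwell-time selections. As an illustrative simplification (not the authors' exact formulation), a single global affine correction can be fitted by least squares from pairs of measured gaze points and assumed true targets (key centers of correct selections); the function names below are hypothetical:

```python
import numpy as np

def fit_affine_correction(measured, targets):
    """Fit a least-squares affine map sending measured gaze points to the
    assumed true targets (e.g. key centers of correct dwell selections).

    measured, targets: (N, 2) arrays of screen coordinates, N >= 3.
    Returns a (3, 2) parameter block for homogeneous coordinates.
    """
    n = measured.shape[0]
    A = np.hstack([measured, np.ones((n, 1))])  # [x, y, 1] rows
    params, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return params

def apply_correction(params, gaze):
    """Apply the fitted correction to one or more gaze samples."""
    g = np.atleast_2d(np.asarray(gaze, dtype=float))
    return np.hstack([g, np.ones((len(g), 1))]) @ params
```

In an online setting, such a fit would be refreshed as new correct selections accumulate, so the correction tracks gradual head movement.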
Gaze behaviour on interacted objects during hand interaction in virtual reality for eye tracking calibration
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3319815
Ludwig Sidenmark, Anders Lundström
Abstract: In this paper, we investigate the probability and timing of attaining gaze fixations on interacted objects during hand interaction in virtual reality, with the main purpose of implicit and continuous eye-tracker recalibration. We conducted an evaluation with 15 participants whose gaze was recorded while they interacted with virtual objects. The data were analysed to find factors influencing the probability of fixations at different phases of interaction for different object types. The results indicate that 1) interacting with stationary objects may be more favourable than moving objects for attaining fixations, 2) prolonged and precision-demanding interactions positively influence the probability of attaining fixations, 3) performing multiple interactions simultaneously can negatively impact the probability of fixations, and 4) feedback can initiate and end fixations on objects.
Citations: 16
Finding the outliers in scanpath data
Pub Date: 2019-06-25 · DOI: 10.1145/3317958.3318225
Michael Burch, Ayush Kumar, K. Mueller, Titus Kervezee, Wouter W. L. Nuijten, Rens Oostenbach, Lucas Peeters, Gijs Smit
Abstract: In this paper, we describe the design of an interactive visualization tool for comparing eye movement data, with a special focus on outliers. To make the tool usable and accessible to anyone with a data science background, we provide a web-based solution using the Dash library, built on the Python programming language and the Python library Plotly. Interactive visualization is well supported by Dash, which makes the tool easy to use. We support multiple ways of comparing user scanpaths, such as bounding boxes and Jaccard indices, to identify similarities. Moreover, we support matrix reordering to clearly separate the outliers in the scanpaths. We further support the data analyst with complementary views such as gaze plots and visual attention maps.
Citations: 9
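One of the scanpath-comparison measures the tool supports, the Jaccard index, reduces to a set-overlap computation over the regions each viewer visited. A minimal sketch, assuming scanpaths are given as sequences of AOI labels:

```python
def jaccard_index(scanpath_a, scanpath_b):
    """Jaccard similarity between the sets of AOIs visited by two scanpaths.

    Order and repetition are ignored, so this captures *where* two viewers
    looked, not in what sequence. Returns a value in [0, 1].
    """
    a, b = set(scanpath_a), set(scanpath_b)
    if not a and not b:
        return 1.0  # two empty scanpaths are trivially identical
    return len(a & b) / len(a | b)
```

Computing this index pairwise over all recordings yields a similarity matrix, which is what the matrix-reordering step can then act on to push outlier scanpaths to the edges.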
GeoGCD: improved visual search via gaze-contingent display
Pub Date: 2019-06-25 · DOI: 10.1145/3317959.3321488
K. Bektaş, A. Çöltekin, J. Krüger, A. Duchowski, S. Fabrikant
Abstract: Gaze-Contingent Displays (GCDs) can improve visual search performance on large displays. GCDs, a Level of Detail (LOD) management technique, discard redundant peripheral detail using various models of human visual perception. Models of depth and contrast perception (e.g., depth of field and foveation) have often been studied to address the trade-off between the computational and perceptual benefits of GCDs. However, color perception models and combinations of multiple models have not received as much attention. In this paper, we present GeoGCD, which uses individual contrast, color, and depth-perception models, and their combination, to render scenes without perceptible latency. As a proof of concept, we present a three-stage user evaluation built on geographic image interpretation tasks. GeoGCD does not impair users' visual search performance or affect their display preferences. On the contrary, in some cases it can significantly improve users' performance.
Citations: 10
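As a rough illustration of the LOD idea behind gaze-contingent displays (not GeoGCD's actual perception models), rendered detail can be reduced as a function of angular distance from the current gaze point. The pixels-per-degree value and thresholds below are invented for the example:

```python
import math

def eccentricity_deg(px, py, gx, gy, pixels_per_degree):
    """Angular distance of a pixel (px, py) from the gaze point (gx, gy),
    using a small-angle approximation with a flat-screen pixels-per-degree
    constant (an assumption of this sketch)."""
    return math.hypot(px - gx, py - gy) / pixels_per_degree

def lod_level(px, py, gx, gy, pixels_per_degree=40.0,
              thresholds=(2.0, 5.0, 10.0)):
    """Map a pixel to a level of detail: 0 = full detail at the fovea,
    higher levels = coarser rendering in the periphery. The degree
    thresholds are illustrative, not the paper's values."""
    e = eccentricity_deg(px, py, gx, gy, pixels_per_degree)
    for level, t in enumerate(thresholds):
        if e < t:
            return level
    return len(thresholds)
```

A renderer would evaluate this per tile or per mip level rather than per pixel, and the perceptual models (contrast, color, depth) would replace the hard thresholds.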
Towards a better description of visual exploration through temporal dynamic of ambient and focal modes
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3323075
Alexandre Milisavljevic, Thomas Le Bras, M. Mancas, Coralie Petermann, B. Gosselin, K. Doré-Mazars
Abstract: Human eye movements are far from well described by current indicators. From the dataset provided by the ETRA 2019 challenge, we analyzed saccades and fixations during free exploration of blank or natural scenes and during visual search. Based on the two modes of exploration, ambient and focal, we used the K coefficient [Krejtz et al. 2016]. We failed to find any differences between tasks, but this indicator gives only the dominant mode over the entire recording. The stability of both modes, assessed with the switch frequency and the mode duration, allowed us to differentiate gaze behavior according to the situation. Time-course analyses of the K coefficient and the switch frequency corroborate that the latter is a useful indicator, describing a greater portion of the eye movement recording.
Citations: 3
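The K coefficient the authors build on (Krejtz et al. 2016) contrasts each fixation's standardized duration with the standardized amplitude of the saccade that follows it; positive values suggest focal viewing (long fixations, short saccades), negative values ambient viewing. A sketch under that reading of the definition:

```python
import numpy as np

def k_coefficient(fix_durations, sacc_amplitudes):
    """K coefficient (after Krejtz et al. 2016): mean difference between each
    fixation's z-scored duration and the z-scored amplitude of the saccade
    that follows it. K > 0 suggests focal viewing; K < 0 ambient viewing.

    fix_durations: durations of the n fixations in a trial.
    sacc_amplitudes: amplitudes of the saccades following them
        (typically n - 1 of them, since the last fixation ends the trial).
    """
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(sacc_amplitudes, dtype=float)
    zd = (d - d.mean()) / (d.std() or 1.0)  # guard against zero variance
    za = (a - a.mean()) / (a.std() or 1.0)
    m = min(len(zd), len(za))  # pair fixation i with the saccade after it
    return float(np.mean(zd[:m] - za[:m]))
```

Computing this over a sliding window rather than the whole recording is what enables the time-course and mode-switch analyses the abstract describes.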
Analyzing gaze transition behavior using Bayesian mixed effects Markov models
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3319839
Islam Akef Ebeid, Nilavra Bhattacharya, J. Gwizdka, A. Sarkar
Abstract: The complex stochastic nature of eye-tracking data calls for sophisticated statistical models to ensure reliable inference in multi-trial eye-tracking experiments. We employ a Bayesian semi-parametric mixed-effects Markov model to compare gaze transition matrices between different experimental factors while accommodating individual random effects. The model not only allows us to assess global influences of the external factors on the gaze transition dynamics but also provides insight into these effects at a deeper, local level. We ran an experiment to explore the impact of recognizing distorted images of artwork and landmarks on gaze transition patterns. Our dataset comprises sequences of areas of interest visited, obtained by applying a content-independent grid to the resulting scan paths in a multi-trial setting. The results suggest that image recognition affects the dynamics of the transitions to some extent, while image type played an essential role in viewing behavior.
Citations: 4
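Setting the Bayesian mixed-effects machinery aside, the basic object being compared across conditions is the first-order gaze transition matrix. A minimal empirical (maximum-likelihood) estimate from a sequence of AOI indices might look like:

```python
import numpy as np

def transition_matrix(aoi_sequence, n_aois):
    """Row-stochastic first-order transition matrix from a sequence of AOI
    indices: entry (i, j) estimates P(next AOI = j | current AOI = i).
    Rows for AOIs never left (or never visited) are all zero."""
    counts = np.zeros((n_aois, n_aois))
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[src, dst] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows; leave zero rows untouched to avoid division by zero.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
```

The paper's model replaces these raw per-trial estimates with a hierarchical prior, so that sparse individual matrices borrow strength across participants.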
Power-efficient and shift-robust eye-tracking sensor for portable VR headsets
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3319821
Dmytro Katrychuk, Henry K. Griffith, Oleg V. Komogortsev
Abstract: Photosensor oculography (PSOG) is a promising solution for reducing the computational requirements of eye-tracking sensors in wireless virtual and augmented reality platforms. This paper proposes a novel machine-learning-based solution to the known performance degradation of PSOG devices in the presence of sensor shifts. Namely, we introduce a convolutional neural network model capable of providing shift-robust end-to-end gaze estimates from the PSOG array output. Moreover, we propose a transfer-learning strategy for reducing model training time. Using a simulated workflow with improved realism, we show that the proposed convolutional model offers improved accuracy over a previously considered multilayer perceptron approach. In addition, we demonstrate that transferring initialization weights from pre-trained models can substantially reduce training time for new users. Finally, we discuss the design trade-offs between accuracy, training time, and power consumption among the considered models.
Citations: 12
SeTA
Pub Date: 2019-06-25 · DOI: 10.1145/3314111.3319830
Andoni Larumbe-Bergera, Sonia Porta, Rafael Cabeza, A. Villanueva
Abstract: The availability of large-scale tagged datasets is a must in the field of deep learning applied to eye tracking. In this paper, the potential of the Supervised Descent Method (SDM) as a semi-automatic labelling tool for eye-tracking images is shown. The objective of the paper is to show how the human effort needed to manually label large eye-tracking datasets can be radically reduced by the use of cascaded regressors. Applications are provided for both high- and low-resolution systems: iris/pupil center labelling is shown as an example for low-resolution images, while pupil contour point detection is demonstrated in high resolution. In both cases, manual annotation requirements are drastically reduced.
Citations: 3