{"title":"Understanding Game Roles and Strategy Using a Mixed Methods Approach","authors":"Kaitlyn M. Roose, Elizabeth S. Veinott","doi":"10.1145/3448018.3458006","DOIUrl":"https://doi.org/10.1145/3448018.3458006","url":null,"abstract":"In this paper, we use the Tracer Method to examine a complex and team-oriented, first-person shooter game to determine how the output can better inform Esports training. The Tracer Method combines eye tracking with Critical Decision Method to focus the analyses on the critical aspects of gameplay, while providing insight into the most frequent visual search transitions across game areas of interest. We examined the differences across three in-game roles and three decision types (strategic, operational, and tactical) using network centrality diagrams and entropy measures. No differences in overall stationary entropy were found for either role or decision type. However, each game role and decision type produced a different network centrality diagram, indicating different visual search transitions, which could support training of Esport players.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128438312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VEDBViz: The Visual Experience Database Visualization and Interaction Tool","authors":"Sanjana Ramanujam, Christian Sinnott, Bharath Shankar, Savannah Halow, Brian Szekely, P. MacNeilage, Kamran Binaee","doi":"10.1145/3450341.3458486","DOIUrl":"https://doi.org/10.1145/3450341.3458486","url":null,"abstract":"Mobile, simultaneous tracking of both the head and eyes is typically achieved through integration of separate head and eye tracking systems because off-the-shelf solutions do not yet exist. Similarly, joint visualization and analysis of head and eye movement data is not possible with standard software packages because these were designed to support either head or eye tracking in isolation. Thus, there is a need for software that supports joint analysis of head and eye data to characterize and investigate topics including head-eye coordination and reconstruction of how the eye is moving in space. To address this need, we have begun developing VEDBViz which supports simultaneous graphing and animation of head and eye movement data recorded with the Intel RealSense T265 and Pupil Core, respectively. We describe current functionality as well as features and applications that are still in development.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130266572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EyeLogin - Calibration-free Authentication Method for Public Displays Using Eye Gaze","authors":"Omair Shahzad Bhatti, Michael Barz, Daniel Sonntag","doi":"10.1145/3448018.3458001","DOIUrl":"https://doi.org/10.1145/3448018.3458001","url":null,"abstract":"The usage of interactive public displays has increased including the number of sensitive applications and, hence, the demand for user authentication methods. In this context, gaze-based authentication was shown to be effective and more secure, but significantly slower than touch- or gesture-based methods. We implement a calibration-free and fast authentication method for situated displays based on saccadic eye movements. In a user study (n = 10), we compare our new method with CueAuth from Khamis et al. (IMWUT’18), an authentication method based on smooth pursuit eye movements. The results show a significant improvement in accuracy from 82.94% to 95.88%. At the same time, we found that the entry speed can be increased enormously with our method, on average, 18.28s down to 5.12s, which is comparable to touch-based input.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131314758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing Cognitive Load in Cyber Learning Materials – An Eye Tracking Study","authors":"Leon Bernard, Sagar Raina, Blair Taylor, S. Kaza","doi":"10.1145/3448018.3458617","DOIUrl":"https://doi.org/10.1145/3448018.3458617","url":null,"abstract":"Cybersecurity education is critical in addressing the global cyber crisis. However, cybersecurity is inherently complex and teaching cyber can lead to cognitive overload among students. Cognitive load includes: 1) intrinsic load (IL- due to inherent difficulty of the topic), 2) extraneous (EL- due to presentation of material), and 3) germane (GL- due to extra effort put in for learning). The challenge is to minimize IL and EL and maximize GL. We propose a model to develop cybersecurity learning materials that incorporate both the Bloom's taxonomy cognitive framework and the design principles of content segmentation and interactivity. We conducted a randomized control/treatment group study to test the proposed model by measuring cognitive load using two eye-tracking metrics (fixation duration and pupil size) between two cybersecurity learning modalities – 1) segmented and interactive modules, and 2) traditional-without segmentation and interactivity (control). Nineteen computer science majors in a large comprehensive university participated in the study and completed a learning module focused on integer overflow in a popular programming language.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122564616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-user Gaze-based Interaction Techniques on Collaborative Touchscreens","authors":"Ken Pfeuffer, Jason Alexander, Hans-Werner Gellersen","doi":"10.1145/3448018.3458016","DOIUrl":"https://doi.org/10.1145/3448018.3458016","url":null,"abstract":"Eye-gaze is a technology for implicit, fast, and hands-free input for a variety of use cases, with the majority of techniques focusing on single-user contexts. In this work, we present an exploration into gaze techniques of users interacting together on the same surface. We explore interaction concepts that exploit two states in an interactive system: 1) users visually attending to the same object in the UI, or 2) users focusing on separate targets. Interfaces can exploit these states with increasing availability of eye-tracking. For example, to dynamically personalise content on the UI to each user, and to provide a merged or compromised view on an object when both users’ gaze are falling upon it. These concepts are explored with a prototype horizontal interface that tracks gaze of two users facing each other. We build three applications that illustrate different mappings of gaze to multi-user support: an indoor map with gaze-highlighted information, an interactive tree-of-life visualisation that dynamically expands on users’ gaze, and a worldmap application with gaze-aware fisheye zooming. We conclude with insights from a public deployment of this system, pointing toward the engaging and seamless ways how eye based input integrates into collaborative interaction.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126609560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards gaze-based prediction of the intent to interact in virtual reality","authors":"Brendan David-John, C. Peacock, Ting Zhang, T. Scott Murdison, Hrvoje Benko, Tanya R. Jonker","doi":"10.1145/3448018.3458008","DOIUrl":"https://doi.org/10.1145/3448018.3458008","url":null,"abstract":"With the increasing frequency of eye tracking in consumer products, including head-mounted augmented and virtual reality displays, gaze-based models have the potential to predict user intent and unlock intuitive new interaction schemes. In the present work, we explored whether gaze dynamics can predict when a user intends to interact with the real or digital world, which could be used to develop predictive interfaces for low-effort input. Eye-tracking data were collected from 15 participants performing an item-selection task in virtual reality. Using logistic regression, we demonstrated successful prediction of the onset of item selection. The most prevalent predictive features in the model were gaze velocity, ambient/focal attention, and saccade dynamics, demonstrating that gaze features typically used to characterize visual attention can be applied to model interaction intent. In the future, these types of models can be used to infer user’s near-term interaction goals and drive ultra-low-friction predictive interfaces.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120911646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Tracking Calibration on Mobile Devices","authors":"Yaxiong Lei","doi":"10.1145/3450341.3457989","DOIUrl":"https://doi.org/10.1145/3450341.3457989","url":null,"abstract":"Eye tracking has been widely used in psychology, human-computer interaction and many other fields. Recently, eye tracking based on off-the-shelf cameras has produced promising results, compared to the traditional eye tracking devices. This presents an opportunity to introduce eye tracking on mobile devices. However, eye tracking on mobile devices face many challenges, including occlusion of faces and unstable and changing distance between face and camera. This research project aims to obtain stable and accurate calibration of front-camera based eye tracking in dynamic contexts through the construction of real-world eye-movement datasets, the introduction of novel context-awareness models and improved gaze estimation methods that can be adapted to partial faces.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132906889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of reading ability of program codes using features of eye movements","authors":"Hiroto Harada, M. Nakayama","doi":"10.1145/3448018.3457421","DOIUrl":"https://doi.org/10.1145/3448018.3457421","url":null,"abstract":"A prediction model for code reading ability using eye movement features was developed, and analysed in order to evaluate reader’s level of mastery and provide appropriate support. Sixty-nine features were extracted from eye movements during the reading of two program codes. These codes consisted of three areas of interest (AOIs) that were modules of code which performed 3 functions. Also, code reader’s performance ability was estimated using responses to question surveys and item response theory. The relationships between estimated ability and the metrics of eye movements were generated using a support vector regression technique. Factors of the extracted metrics were analysed. These results confirm the relationship between code comprehension reading behaviour and reading comprehension performance.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"34 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114119344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Climate change overlooked. The role of attitudes and mood regulation in visual attention to global warming","authors":"Anna Mazurowska","doi":"10.1145/3450341.3457991","DOIUrl":"https://doi.org/10.1145/3450341.3457991","url":null,"abstract":"Why, in the face of climate catastrophe, do people still seem to underestimate the weight of the threat without taking adequate action to fight global warming? Among many reasons for this, the current study aims to dive into people’s cognitive abilities and explore the barriers located at the individual level, using an eye-tracking methodology. Previous findings indicate that a pro-environmental attitude does not necessarily lead to pro-environmental behavior. What may stand in the way is ignorance that can be mediated by other factors. This study will examine whether visual distraction from images depicting the impacts of climate change is mediated by mood regulation and environmental concern. This will help to fit educational and information materials to specific viewers, which may result in more pro-environmental behaviors in the future.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114878880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REyeker: Remote Eye Tracker","authors":"Jonas Mucke, Marc Schwarzkopf, J. Siegmund","doi":"10.1145/3448018.3457423","DOIUrl":"https://doi.org/10.1145/3448018.3457423","url":null,"abstract":"Eye tracking allows us to shed light on how developers read and understand source code and how that is linked to cognitive processes. However, studies with eye trackers are usually tied to a laboratory, requiring to observe participants one at a time, which is especially challenging in the current pandemic. To allow for safe and parallel observation, we present our tool REyeker, which allows researchers to observe developers remotely while they understand source code from their own computer without having to directly interact with the experimenter. The original image is blurred to distort text regions and disable legibility, requiring participants to click on areas of interest to deblur them to make them readable. While REyeker naturally can only track eye movements to a limited degree, it allows researchers to get a basic understanding of developers’ reading behavior.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"121 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124791532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}