W!NCE: eyewear solution for upper face action units monitoring
Authors: Soha Rostaminia, Alexander Lamson, Subhransu Maji, Tauhidur Rahman, Deepak Ganesan
DOI: 10.1145/3314111.3322501 (https://doi.org/10.1145/3314111.3322501)
Published: 2019-06-25, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: The ability to unobtrusively and continuously monitor one's facial expressions has implications for a variety of application domains, ranging from affective computing to healthcare and the entertainment industry. The standard Facial Action Coding System (FACS), along with camera-based methods, has been shown to provide objective indicators of facial expressions; however, these approaches can be fairly limited for mobile applications due to privacy concerns and awkward positioning of the camera. To bridge this gap, W!NCE re-purposes a commercially available electrooculography-based eyeglass (J!NS MEME) for continuous and unobtrusive sensing of upper facial action units with high fidelity. W!NCE detects facial gestures using a two-stage processing pipeline involving motion artifact removal and facial action detection. We validate our system's applicability through extensive evaluation on data from 17 users in stationary and ambulatory settings.

{"title":"Towards a data-driven framework for realistic self-organized virtual humans: coordinated head and eye movements","authors":"Zhizhuo Yang, Reynold J. Bailey","doi":"10.1145/3314111.3322874","DOIUrl":"https://doi.org/10.1145/3314111.3322874","url":null,"abstract":"Driven by significant investments from the gaming, film, advertising, and customer service industries among others, efforts across many different fields are converging to create realistic representations of humans that look like (computer graphics), sound like (natural language generation), move like (motion capture), and reason like (artificial intelligence) real humans. The ultimate goal of this work is to push the boundaries even further by exploring the development of realistic self-organized virtual humans that are capable of demonstrating coordinated behaviors across different modalities. Eye movements, for example, may be accompanied by changes in facial expression, head orientation, posture, gait properties, or speech. Traditionally however, these modalities are captured and modeled separately and this disconnect contributes to the well-known uncanny valley phenomenon. We focus initially on facial modalities, in particular, coordinated eye and head movements (and eventually facial expressions), but our proposed data-driven framework will be able to accommodate other modalities as well. transfer [Laine et al. 2017]. Despite these advances, the resulting renderings or animations are often still distinguishable from a real human, sometimes in unsettling ways - the so called uncanny valley phenomenon [Mori et al. 2012]. We argue that the traditional approach of capturing and modeling various human modalities separately contributes this effect. In this work, we focus on capturing, transferring, and generating realistic coordinated facial modalities (eye movements, head movements, and eventually facial expressions). We envision a flexible framework that can be extended to accommodate other modalities as well.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128387257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze awareness improves collaboration efficiency in a collaborative assembly task","authors":"Haofei Wang, Bertram E. Shi","doi":"10.1145/3317959.3321492","DOIUrl":"https://doi.org/10.1145/3317959.3321492","url":null,"abstract":"In building human robot interaction systems, it would be helpful to understand how humans collaborate, and in particular, how humans use others' gaze behavior to estimate their intent. Here we studied the use of gaze in a collaborative assembly task, where a human user assembled an object with the assistance of a human helper. We found that the being aware of the partner's gaze significantly improved collaboration efficiency. Task completion times were much shorter when gaze communication was available, than when it was blocked. In addition, we found that the user's gaze was more likely to lie on the object of interest in the gaze-aware case than the gaze-blocked case. In the context of human-robot collaboration systems, our results suggest that gaze data in the period surrounding verbal requests will be more informative and can be used to predict the target object.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126568518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning investigation for chess player attention prediction using eye-tracking and game data
Authors: Justin Le Louëdec, Thomas Guntz, J. Crowley, D. Vaufreydaz
DOI: 10.1145/3314111.3319827 (https://doi.org/10.1145/3314111.3319827)
Published: 2019-04-17, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article generates saliency maps that capture hierarchical and spatial features of the chessboard in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess-piece configurations. This corpus is completed with synthetically generated data from actual games gathered from an online chess platform. Experiments using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features, pretrained on natural images, were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.

Differential privacy for eye-tracking data
Authors: Ao Liu, Lirong Xia, A. Duchowski, Reynold J. Bailey, K. Holmqvist, Eakta Jain
DOI: 10.1145/3314111.3319823 (https://doi.org/10.1145/3314111.3319823)
Published: 2019-04-15, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: As large eye-tracking datasets are created, data privacy is a pressing concern for the eye-tracking community. De-identifying data does not guarantee privacy because multiple datasets can be linked for inferences. A common belief is that aggregating individuals' data into composite representations such as heatmaps protects the individual. However, we analytically examine the privacy of (noise-free) heatmaps and show that they do not guarantee privacy. We further propose two noise mechanisms that guarantee privacy and analyze their privacy-utility tradeoff. Analysis reveals that our Gaussian noise mechanism is an elegant solution to preserve privacy for heatmaps. Our results have implications for interdisciplinary research to create differentially private mechanisms for eye tracking.

Eye tracking support for visual analytics systems: foundations, current applications, and research challenges
Authors: Nelson Silva, Tanja Blascheck, R. Jianu, Nils Rodrigues, D. Weiskopf, M. Raubal, T. Schreck
DOI: 10.1145/3314111.3319919 (https://doi.org/10.1145/3314111.3319919)
Published: 2019-03-03, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.

Privacy-aware eye tracking using differential privacy
Authors: Julian Steil, Inken Hagestedt, Michael Xuelin Huang, A. Bulling
DOI: 10.1145/3314111.3319915 (https://doi.org/10.1145/3314111.3319915)
Published: 2018-12-19, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users' privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy-sensitive tasks: We show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.

PrivacEye: privacy-preserving head-mounted eye tracking using egocentric scene image and eye movement features
Authors: Julian Steil, Marion Koelle, Wilko Heuten, Susanne CJ Boll, A. Bulling
DOI: 10.1145/3314111.3319913 (https://doi.org/10.1145/3314111.3319913)
Published: 2018-01-13, Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
Abstract: Eyewear devices, such as augmented reality displays, increasingly integrate eye tracking, but the first-person camera required to map a user's gaze to the visual scene can pose a significant threat to user and bystander privacy. We present PrivacEye, a method to detect privacy-sensitive everyday situations and automatically enable and disable the eye tracker's first-person camera using a mechanical shutter. To close the shutter in privacy-sensitive situations, the method uses a deep representation of the first-person video combined with rich features that encode users' eye movements. To open the shutter without visual input, PrivacEye detects changes in users' eye movements alone to gauge changes in the "privacy level" of the current situation. We evaluate our method on a first-person video dataset recorded in daily life situations of 17 participants, annotated by themselves for privacy sensitivity, and show that our method is effective in preserving privacy in this challenging setting.

{"title":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","authors":"","doi":"10.1145/3314111","DOIUrl":"https://doi.org/10.1145/3314111","url":null,"abstract":"","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132020625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}