Sensing visual attention using an interactive bidirectional HMD
T. Schuchert, Sascha Voth, Judith Baumgarten
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401852
Abstract: This paper presents a novel system for sensing attentional behavior in Augmented Reality (AR) environments by analyzing eye movement. The system is based on lightweight, head-mounted optical see-through glasses containing bidirectional microdisplays, which allow image display and eye tracking on a single chip. The sensing and interaction application was developed in the European project ARtSENSE in order to (1) detect museum visitors' attention to and interest in artworks as well as in the presented AR content, (2) present appropriate personalized information as augmented overlays based on the detected attention, and (3) allow museum visitors to interact with the system and the AR content by gaze. In this paper we present a novel algorithm for pupil estimation in low-resolution eye-tracking images and show first results on attention estimation by eye movement analysis and on gaze-based interaction with the system.
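The abstract does not detail the authors' pupil-estimation algorithm. As a point of reference only, a minimal baseline for the same task (locating a dark pupil in a low-resolution eye crop) might look like the following generic OpenCV sketch; it is an assumption for illustration, not the paper's method.

```python
# Baseline dark-pupil estimation for a low-resolution eye image.
# NOTE: this is NOT the paper's algorithm (the abstract does not describe it);
# it is a generic OpenCV sketch of the task the paper addresses.
import cv2
import numpy as np

def estimate_pupil(eye_gray):
    """Return (cx, cy) of the darkest blob, assumed to be the pupil."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    # The pupil is usually the darkest region; threshold near the minimum intensity.
    min_val = float(blurred.min())
    _, mask = cv2.threshold(blurred, min_val + 20, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Usage on a hypothetical low-resolution eye crop:
# eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)
# print(estimate_pupil(eye))
```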
Gaze and conversational engagement in multiparty video conversation: an annotation scheme and classification of high and low levels of engagement
R. Bednarik, Shahram Eivazi, Michal Hradiš
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401846
Abstract: When using a multiparty video-mediated system, interacting participants assume a range of roles and exhibit behaviors according to how engaged in the communication they are. In this paper we focus on estimating conversational engagement from the gaze signal. In particular, we present an annotation scheme for conversational engagement and a statistical analysis of gaze behavior across varying levels of engagement, and we classify vectors of computed eye-tracking measures. The results show that in 74% of cases the level of engagement can be correctly classified as either high or low. In addition, we describe the nuances of gaze during distinct levels of engagement.
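The abstract reports classifying feature vectors of eye-tracking measures into high versus low engagement, but does not name the features or classifier. A minimal sketch of that kind of two-class setup is shown below; the feature names, the synthetic data, and the SVM choice are assumptions for illustration.

```python
# Illustrative two-class engagement classifier over eye-tracking measures.
# Feature names, synthetic data, and the SVM are assumptions; the abstract
# does not specify the authors' exact features or classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Each row: [mean fixation duration (ms), saccade rate (1/s), gaze-at-speaker ratio]
X = rng.normal(loc=[300, 3.0, 0.5], scale=[80, 1.0, 0.2], size=(100, 3))
y = rng.integers(0, 2, size=100)           # 0 = low, 1 = high engagement

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)  # near chance here, since y is random
print("mean CV accuracy:", scores.mean())
```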
Eye gaze assisted human-computer interaction in a hand gesture controlled multi-display environment
Tong Cha, S. Maier
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401849
Abstract: A human-computer interaction (HCI) framework that processes user input in a multi-display environment can detect and interpret dynamic hand-gesture input. In an environment equipped with large displays, this system enables fully contactless application control. The framework was extended with a new input modality that brings human gaze into the interaction. The main contribution of this work is the ability to unite arbitrary types of computer input and to obtain a detailed view of the behaviour of every modality, with the information available as high-speed data samples received in real time. The framework is designed with particular regard to the gaze and hand-gesture input modalities in multi-display environments with large-area screens.
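The framework's API is not described in the abstract beyond "uniting" modalities into real-time data samples. A minimal sketch of what such fusion could look like, merging timestamped gaze samples and gesture events into one ordered stream, follows; the event fields and rates are hypothetical.

```python
# Minimal sketch of merging gaze samples and hand-gesture events into one
# timestamped input stream, as a multi-display HCI framework might expose.
# Event fields and payloads are assumptions; the paper's API is not described.
from dataclasses import dataclass
from typing import Iterator
import heapq

@dataclass(order=True)
class InputEvent:
    timestamp: float            # seconds since session start
    modality: str = ""          # "gaze" or "gesture"
    payload: tuple = ()         # e.g. (x, y, display_id) or ("swipe_left",)

def merge_streams(*streams: Iterator[InputEvent]) -> Iterator[InputEvent]:
    """Yield events from all modalities in global timestamp order."""
    return heapq.merge(*streams, key=lambda e: e.timestamp)

gaze = iter([InputEvent(0.010, "gaze", (812, 430, 1)),
             InputEvent(0.020, "gaze", (815, 428, 1))])
gestures = iter([InputEvent(0.015, "gesture", ("swipe_left",))])

for event in merge_streams(gaze, gestures):
    print(event.timestamp, event.modality, event.payload)
```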
Brain-enhanced synergistic attention (BESA)
D. Khosla, Matthew S. Keegan, Lei Zhang, K. Martin, Darrel J. VanBuer, David J. Huber
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401837
Abstract: In this paper, we describe a hybrid human-machine system for searching for and detecting Objects of Interest (OI) in imagery. Automated methods for OI detection based on models of human visual attention have received much interest, but they are inherently bottom-up and feature-driven, whereas humans fixate on regions of imagery based on a much stronger top-down component. While it may be possible to include some aspects of top-down cognition in these methods, it is difficult to capture all aspects of human cognition in an automated algorithm. Our hypothesis is that combining automated methods with human fixations provides a better solution than either alone. In this work, we describe a Brain-Enhanced Synergistic Attention (BESA) system that combines models of visual attention with real-time eye fixations from a human for accurate search and detection of OI. We describe two different BESA schemes and provide implementation details. Preliminary studies were conducted to determine the efficacy of the system, and initial results are promising. Typical applications of this technology are in surveillance, reconnaissance, and intelligence analysis.
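The two BESA fusion schemes are not specified in the abstract. One plausible way to combine a bottom-up saliency map with real-time human fixations is to blend it with a smoothed fixation-density map, as in the sketch below; the blending rule, kernel width, and toy data are assumptions, not the paper's schemes.

```python
# Sketch of fusing a bottom-up saliency map with human fixations into a
# single priority map. The weighted-sum fusion rule and kernel width are
# assumptions for illustration, not the BESA schemes described in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density(fixations, shape, sigma=15.0):
    """Build a smoothed fixation map from (row, col) fixation points."""
    fmap = np.zeros(shape, dtype=float)
    for r, c in fixations:
        fmap[int(r), int(c)] += 1.0
    fmap = gaussian_filter(fmap, sigma=sigma)
    return fmap / (fmap.max() + 1e-9)

def combine(saliency, fixations, alpha=0.5):
    """Weighted combination of normalized saliency and fixation density."""
    sal = saliency / (saliency.max() + 1e-9)
    fix = fixation_density(fixations, saliency.shape)
    return alpha * sal + (1.0 - alpha) * fix

# Toy example: random "saliency" map and two fixations.
saliency = np.random.rand(240, 320)
priority = combine(saliency, fixations=[(120, 160), (60, 240)])
print(priority.shape, priority.max())
```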
Multimodal corpus of conversations in mother tongue and second language by same interlocutors
Kosuke Kabashima, Kristiina Jokinen, M. Nishida, Seiichi Yamamoto
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401845
Abstract: In this paper we describe multimodal data collected from conversations held both in the participants' mother tongue and in their second language. We also compare eye movements and utterance styles between communication in the mother tongue and in the second language, and present the results obtained from this analysis.
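The abstract does not say which statistics are used to compare the two language conditions. A minimal sketch of one such paired comparison for a single gaze measure is given below; the measure, the synthetic values, and the paired t-test are assumptions for illustration.

```python
# Sketch of a paired comparison of one gaze measure between the mother-tongue
# (L1) and second-language (L2) conditions for the same speakers. The measure,
# values, and test choice are assumptions; the abstract does not state them.
import numpy as np
from scipy import stats

# Hypothetical per-speaker ratios of time spent gazing at the interlocutor.
gaze_ratio_L1 = np.array([0.42, 0.55, 0.38, 0.61, 0.47])
gaze_ratio_L2 = np.array([0.51, 0.60, 0.45, 0.66, 0.52])

t, p = stats.ttest_rel(gaze_ratio_L1, gaze_ratio_L2)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```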
A framework of personal assistant for computer users by analyzing video stream
Zixuan Wang, Jinyun Yan, H. Aghajan
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401850
Abstract: Time spent at the computer is increasing steadily with the rapid development of the Internet. Over long periods in front of the computer, bad posture and habits create health risks, and unnoticed fatigue impairs work efficiency. We investigate how users behave in front of the computer using a camera, considering face pose, eye gaze, eye blinking, and yawn frequency. These visual cues are then used to suggest that users correct poor posture or take a break. We propose a novel personal-assistant framework for users who work at the computer for long stretches: the camera produces a video stream that records the user's behavior, and the assistant system automatically analyzes the visual input and gives suggestions at the right time. Our experiments show that the system detects the visual cues with high accuracy and makes reasonable suggestions to users. This work opens the area of assistant systems for individuals who use the computer frequently.
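The paper's detectors for the individual cues are not described in the abstract. For the blinking cue specifically, one common building block is the eye aspect ratio computed from eye landmarks; the sketch below illustrates that idea under the assumption that landmarks come from some face model (e.g. dlib or MediaPipe), which is not stated in the paper.

```python
# Sketch of blink detection from eye landmarks using the eye aspect ratio
# (EAR). This is one plausible building block for the blinking cue; the
# paper's actual detectors are not described in the abstract, and the
# landmark source (e.g. a dlib/MediaPipe face model) is assumed.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmarks ordered around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = 2.0 * np.linalg.norm(eye[0] - eye[3])
    return vertical / horizontal

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count drops of the EAR below threshold lasting >= min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Toy EAR trace: open eyes (~0.3) with one two-frame blink.
print(count_blinks([0.31, 0.30, 0.12, 0.10, 0.29, 0.30]))  # -> 1
```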
Multi-modal object of interest detection using eye gaze and RGB-D cameras
Christopher D. McMurrough, Jonathan Rich, C. Conly, V. Athitsos, F. Makedon
Gaze-In '12, 2012-10-26. DOI: 10.1145/2401836.2401838
Abstract: This paper presents a low-cost, wearable headset for mobile 3D Point of Gaze (PoG) estimation in assistive applications. The device consists of an eye-tracking camera and a forward-facing RGB-D scene camera, which together provide an estimate of the user's gaze vector and its intersection with a 3D point in space. A computational approach that considers object 3D information and visual appearance together with the visual gaze interactions of the user is also given to demonstrate the utility of the device. The resulting system is able to identify, in real time, known objects within a scene that intersect the user's gaze vector.
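The core geometric step described here, intersecting the estimated gaze vector with 3D points from the RGB-D camera, can be sketched as a ray/point-cloud proximity test. The version below assumes the eye-to-scene-camera calibration is already done and uses illustrative thresholds; it is not the paper's implementation, and the object-recognition stage is omitted.

```python
# Sketch of intersecting a user gaze ray with an RGB-D point cloud: find the
# cloud point closest to the ray and report it if it lies within a tolerance.
# Calibration between eye camera and scene camera is assumed done; thresholds
# and frames are illustrative, not the paper's implementation.
import numpy as np

def gaze_ray_hit(origin, direction, points, max_dist=0.02):
    """origin, direction: 3-vectors in the scene-camera frame (metres).
    points: (N, 3) point cloud. Returns index of the hit point or None."""
    d = direction / np.linalg.norm(direction)
    rel = points - origin                       # vectors from eye to each point
    t = rel @ d                                 # projection onto the ray
    t = np.clip(t, 0.0, None)                   # ignore points behind the user
    closest = origin + np.outer(t, d)           # nearest point on the ray
    dist = np.linalg.norm(points - closest, axis=1)
    idx = int(np.argmin(dist))
    return idx if dist[idx] <= max_dist else None

cloud = np.array([[0.0, 0.0, 1.0], [0.3, 0.1, 1.2], [-0.2, 0.0, 0.8]])
hit = gaze_ray_hit(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]), points=cloud)
print("hit point index:", hit)  # -> 0
```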
From the eye to the heart: eye contact triggers emotion simulation
Magdalena Rychlowska, Leah Zinner, Serban C. Musca, P. Niedenthal
Gaze-In '12, 2012-10-22. DOI: 10.1145/2401836.2401841
Abstract: Smiles are complex facial expressions that carry multiple meanings. Recent literature suggests that deep processing of smiles via embodied simulation can be triggered by achieved eye contact. Three studies supported this prediction. In Study 1, participants rated the emotional impact of portraits that varied in eye contact and smiling; smiling portraits that achieved eye contact were more emotionally impactful than smiling portraits that did not. In Study 2, participants saw photographs of smiles in which eye contact was manipulated; the same smile of the same individual caused more positive emotion and higher ratings of authenticity when eye contact was achieved than when it was not. In Study 3, participants' facial EMG was recorded; activity over the zygomatic major (i.e. smile) muscle was greater when participants observed smiles that achieved eye contact compared to smiles that did not. These results support the role of eye contact as a trigger of embodied simulation. Implications for human-machine interactions are discussed.