{"title":"Perception of drowsiness based on correlation with facial image features","authors":"Yugo Sato, Takuya Kato, N. Nozawa, S. Morishima","doi":"10.1145/2931002.2947705","DOIUrl":"https://doi.org/10.1145/2931002.2947705","url":null,"abstract":"This paper presents a video-based method for detecting drowsiness. Generally, human beings can perceive fatigue and drowsiness by looking at faces, and this ability has been studied in many ways. A drowsiness detection method based on facial videos has been proposed [Nakamura et al. 2014]. In their method, a set of facial features computed with computer vision techniques is classified into drowsiness degrees with the k-nearest neighbor algorithm. However, facial features that are ineffective for reproducing human perception are not removed before the machine learning step, which can decrease detection accuracy.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114833515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
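The k-nearest neighbor classification step referenced in the abstract above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the feature vectors and drowsiness labels are hypothetical:

```python
import numpy as np
from collections import Counter

def knn_classify(train_feats, train_labels, query, k=3):
    """Label a query facial-feature vector by majority vote
    among its k nearest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_labels[nearest]).most_common(1)[0][0]
```

The paper's point is that this vote is only as good as the features fed in: distances computed over features irrelevant to perceived drowsiness dilute the vote.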
{"title":"Animated versus static views of steady flow patterns","authors":"C. Ware, Daniel Bolan, Ricky Miller, D. Rogers, J. Ahrens","doi":"10.1145/2931002.2931012","DOIUrl":"https://doi.org/10.1145/2931002.2931012","url":null,"abstract":"Two experiments were conducted to test the hypothesis that animated representations of vector fields are more effective than common static representations even for steady flow. We compared four flow visualization methods: animated streamlets, animated orthogonal line segments (where short lines were elongated orthogonal to the flow direction but animated in the direction of flow), static equally spaced streamlines, and static arrow grids. The first experiment involved a pattern detection task in which the participant searched for an anomalous flow pattern in a field of similar patterns. The results showed that both the animation methods produced more accurate and faster responses. The second experiment involved mentally tracing an advection path from a central dot in the flow field and marking where the path would cross the boundary of a surrounding circle. For this task the animated streamlets resulted in better performance than the other methods, but the animated orthogonal particles resulted in the worst performance. We conclude with recommendations for the representation of steady flow patterns.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115124303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating human gaze patterns during grasping tasks: robot versus human hand","authors":"Sai Krishna Allani, Brendan David-John, Javier Ruiz, Saurabh Dixit, Jackson Carter, C. Grimm, Ravi Balasubramanian","doi":"10.1145/2931002.2931007","DOIUrl":"https://doi.org/10.1145/2931002.2931007","url":null,"abstract":"Perception and gaze are an integral part of determining where and how to grasp an object. In this study we analyze how gaze patterns differ when participants are asked to manipulate a robotic hand to perform a grasping task compared with using their own hand. We have three findings. First, while gaze patterns for the object are similar in both conditions, participants spent substantially more time gazing at the robotic hand than at their own, particularly at the wrist and finger positions. Second, we provide evidence that for complex objects (e.g., a toy airplane) participants essentially treated the object as a collection of sub-objects. Third, we performed a follow-up study showing that camera angles which clearly display the features participants gaze at are more effective for determining the effectiveness of a grasp from images. Our findings are relevant both for automated algorithms (where visual cues are important for analyzing objects for potential grasps) and for designing tele-operation interfaces (how best to present the visual data to the remote operator).","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127144412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of lighting and shading for animated virtual characters","authors":"Pisut Wisessing, J. Dingliana, R. Mcdonnell","doi":"10.1145/2931002.2931015","DOIUrl":"https://doi.org/10.1145/2931002.2931015","url":null,"abstract":"The design of lighting in Computer Graphics is directly derived from cinematography, and many digital artists follow the conventional wisdom on how lighting is set up to convey drama, appeal, or emotion. In this paper, we are interested in investigating the most commonly used lighting techniques to more formally determine their effect on our perception of animated virtual characters. Firstly, we commissioned a professional animator to create a sequence of dramatic emotional sentences for a typical CG cartoon character. Then, we rendered that character using a range of lighting directions, intensities, and shading techniques. Participants of our experiment rated the emotion, the intensity of the performance, and the appeal of the character. Our results provide new insights into how animated virtual characters are perceived, when viewed under different lighting conditions.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131087709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effects of artificially reduced field of view and peripheral frame stimulation on distance judgments in HMDs","authors":"Bochao Li, A. Nordman, James W. Walker, S. Kuhl","doi":"10.1145/2931002.2931013","DOIUrl":"https://doi.org/10.1145/2931002.2931013","url":null,"abstract":"Numerous studies have reported underestimated egocentric distances in virtual environments through head-mounted displays (HMDs). However, it has been found that distance judgments made through Oculus Rift HMDs are much less compressed, and their relatively high device field of view (FOV) may play an important role. Some studies showed that applying constant white light in viewers' peripheral vision improved their distance judgments through HMDs. In this study, we examine the effects of the device FOV and peripheral vision by performing a blind walking experiment through an Oculus Rift DK2 HMD with three different conditions. For the BlackFrame condition, we rendered a rectangular black frame to reduce the device FOV of the DK2 HMD to match an NVIS nVisor ST60 HMD. In the WhiteFrame and GreyFrame conditions, we changed the frame color to solid white and middle grey. From the results, we found that the distance judgments made through the black frame were significantly underestimated relative to the WhiteFrame condition. However, no significant differences were observed between the WhiteFrame and GreyFrame conditions. This result provides evidence that the device FOV and peripheral light could influence distance judgments in HMDs, and the degree of influence might not change proportionally with respect to the peripheral light brightness.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing gaze synchrony in cinema: a pilot study","authors":"K. Breeden, P. Hanrahan","doi":"10.1145/2931002.2947704","DOIUrl":"https://doi.org/10.1145/2931002.2947704","url":null,"abstract":"Recent advances in personalized displays now allow for the delivery of high-fidelity content only to the most sensitive regions of the visual field, a process referred to as foveation [Guenter et al. 2012]. Because foveated systems require accurate knowledge of gaze location, attentional synchrony is particularly relevant: this is observed when multiple viewers attend to the same image region concurrently.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116884916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical evaluation of visuo-haptic feedback on physical reaching behaviors during 3D interaction in real and immersive virtual environments","authors":"Elham Ebrahimi, Sabarish V. Babu, C. Pagano, S. Jörg","doi":"10.1145/2931002.2963135","DOIUrl":"https://doi.org/10.1145/2931002.2963135","url":null,"abstract":"","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124099236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoupling light reflex from pupillary dilation to measure emotional arousal in videos","authors":"Pallavi Raiturkar, A. Kleinsmith, A. Keil, Arunava Banerjee, Eakta Jain","doi":"10.1145/2931002.2931009","DOIUrl":"https://doi.org/10.1145/2931002.2931009","url":null,"abstract":"Predicting the exciting portions of a video is a widely relevant problem because of applications such as video summarization, searching for similar videos, and recommending videos to users. Researchers have proposed the use of physiological indices such as pupillary dilation as a measure of emotional arousal. The key problem with using the pupil to measure emotional arousal is accounting for pupillary response to brightness changes. We propose a linear model of pupillary light reflex to predict the pupil diameter of a viewer based only on incident light intensity. The residual between the measured pupillary diameter and the model prediction is attributed to the emotional arousal corresponding to that scene. We evaluate the effectiveness of this method of factoring out pupillary light reflex for the particular application of video summarization. The residual is converted into an exciting-ness score for each frame of a video. We show results on a variety of videos, and compare against ground truth as reported by three independent coders.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124396075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
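The residual idea in the abstract above (a linear light-reflex model whose prediction error is attributed to arousal) can be sketched with an ordinary least-squares fit; this is a simplified illustration under assumed inputs, with all names (per-frame brightness and measured pupil diameter arrays) hypothetical:

```python
import numpy as np

def arousal_scores(brightness, pupil_diameter):
    """Fit a linear light-reflex model pupil ~ a*brightness + b,
    then return the per-frame residual (measured minus predicted)
    as an arousal score."""
    X = np.column_stack([brightness, np.ones_like(brightness)])
    coef, *_ = np.linalg.lstsq(X, pupil_diameter, rcond=None)
    return pupil_diameter - X @ coef
```

Frames where the measured pupil is larger than the brightness-driven prediction get positive scores, which is the signal the paper converts into an exciting-ness score per frame.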
{"title":"Leveraging gaze data for segmentation and effects on comics","authors":"Ishwarya Thirunarayanan, S. Koppal, J. Shea, Eakta Jain","doi":"10.1145/2931002.2947703","DOIUrl":"https://doi.org/10.1145/2931002.2947703","url":null,"abstract":"In this work, we present a semi-automatic method based on gaze data to identify the objects in comic images on which digital effects will look best. Our key contribution is a robust technique to cluster the noisy gaze data without having to specify the number of clusters as input. We also present an approach to segment the identified object of interest.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114619142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
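The abstract above does not name its clustering technique, only that it needs no cluster count as input. Mean shift is one standard method with that property; the following is a minimal sketch of it (not the authors' algorithm), with bandwidth and point arrays as assumed inputs:

```python
import numpy as np

def mean_shift(points, bandwidth=0.5, iters=50):
    """Shift every point toward the Gaussian-weighted mean of the data;
    points whose shifted positions converge to the same mode share a
    cluster. No cluster count is specified up front."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            w = np.exp(-np.sum((points - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    # group points whose modes ended up within one bandwidth of each other
    labels = -np.ones(len(points), dtype=int)
    n_clusters = 0
    for i in range(len(points)):
        if labels[i] == -1:
            close = np.linalg.norm(shifted - shifted[i], axis=1) < bandwidth
            labels[close] = n_clusters
            n_clusters += 1
    return labels
```

For noisy gaze data the bandwidth acts as the only tuning knob, trading off merging nearby fixations against splitting a jittery one.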
{"title":"Is the motion of a child perceivably different from the motion of an adult?","authors":"Eakta Jain, Lisa Anthony, Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, Julia Woodward","doi":"10.1145/2931002.2963133","DOIUrl":"https://doi.org/10.1145/2931002.2963133","url":null,"abstract":"","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"279 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114025602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}