{"title":"Haptic mirror for active exploration of facial expressions by individuals who are blind","authors":"S. Yasmin, T. McDaniel, S. Panchanathan","doi":"10.1145/2628257.2628356","DOIUrl":"https://doi.org/10.1145/2628257.2628356","url":null,"abstract":"Communicating different emotions to individuals who are blind through tactile feedback is an active area of research. But most work is static in nature as different facial expressions of emotions are conveyed through a fixed set of facial features which may have meaning only to those who previously had sight. To individuals who are congenitally blind, these fixed sets of information are abstract, and little research reflects how this population can properly interpret and relate these fixed sets of signs as per their own nonvisual experience. Our goal is to develop a complete system that integrates feature extraction with haptic recognition. As emotion detection through image and video analysis often fails, we give emphasis on active exploration of facial expressions of one's self so that the movement of facial features and expressions becomes meaningful to users toward becoming proficient at interpreting facial expressions related to different emotions. We propose a dynamic haptic environment where an individual who is blind can perceive the reflection of his own facial movements to better understand and explore different facial expressions on the basis of the movement of his own facial features.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124241933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of congruent facial and haptic expressions of emotions","authors":"Yoren Gaffary, Jean-Claude Martin, M. Ammi","doi":"10.1145/2628257.2628349","DOIUrl":"https://doi.org/10.1145/2628257.2628349","url":null,"abstract":"Haptic expression of emotions has received less attention than other modalities. Bonnet et al. [2011] combine visio-haptic modalities to improve the recognition and discrimination of some emotions. However, few works investigated how these modalities complement each other. For instance, Bickmore et al. [2010] highlight some non-significant tendencies of complementarity between the visual and haptic modalities.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121479503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"'Explorances' or why (some) physical entities help us be more creative","authors":"Sowmya Somanath, E. Sharlin, M. Sousa","doi":"10.1145/2628257.2628351","DOIUrl":"https://doi.org/10.1145/2628257.2628351","url":null,"abstract":"We believe that every physical entity has a set of attributes that defines the degree of how creatively it can be used. For example, the affordances, abstractness and modular nature of Lego™ blocks allows them to take on different forms of expression that showcases varying levels of human creativity (e.g. building alphabets, creating homes etc.). Similarly, when a DIY designer uses bottles to build houses, it projects his creative skills, but at the same time it speaks about the materiality, affordances and embodiment of the bottle which lends itself readily to creative and novel interaction design efforts. This theory, that every entity has a set of attributes that allows them to lend themselves more or less readily to creative and novel interactive design explorations is what we call explorances and is the focus of our proposed work.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126791929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Re-evaluating benefits of body-based rotational cues for maintaining orientation in virtual environments: men benefit from real rotations, women don't","authors":"Timofey Grechkin, B. Riecke","doi":"10.1145/2628257.2628275","DOIUrl":"https://doi.org/10.1145/2628257.2628275","url":null,"abstract":"Relying exclusively on visual information to maintain orientation while traveling in virtual environments is challenging. However, it is currently unclear how much body-based information is required to produce a significant improvement in navigation performance. In our study participants explored unfamiliar virtual mazes using visual-only and physical rotations. Participants's ability to remain oriented was measured using a novel pointing task. While men consistently benefitted from using physical rotations versus visual-only rotations (lower absolute pointing errors, configuration errors, and absolute ego-orientation errors), women did not. We discuss design implications for locomotion interfaces in virtual environments. Our findings also suggest that investigating individual differences may help to resolve apparent conflicts in the literature regarding potential benefits of physical rotational cues for effective spatial orientation.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126989700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Calibration and depth matching accuracy with a table-mounted augmented reality haploscope","authors":"Chunya Hua, S. Ellis, J. Swan","doi":"10.1145/2628257.2628354","DOIUrl":"https://doi.org/10.1145/2628257.2628354","url":null,"abstract":"In many medical Augmented Reality (AR) applications, doctors want the ability to place a physical object, such as a needle or other medical device, at the depth indicated by a virtual object. Often, this object will be located behind an occluding surface, such as the patient's skin. In this poster we describe efforts to determine how accurately this can be done. In particular, we used a table-mounted AR haploscope to conduct two depth-matching experiments. Figure 1 shows our AR haploscope, which was originally designed and built by Singh [2013]. We experimented with different methods of calibrating the haploscope, with the goal of being able to place virtual AR target objects in space that can be depth-matched with an accuracy that is as close as possible to physical test targets. We eventually developed a calibration method that uses two laser levels to generate vertical fans of light (Figure 1). We shoot these light fans through the AR haploscope optics, where it bounces off of the optical combiners and onto the image generators. We first set the fans parallel, in order to properly model an observer's inter-pupillary distance (IPD). Next, we cant the fans inwards, in order to model different vergence distances. We validated this calibration with two AR depth-matching experiments. These experiments measured the effect of an occluding surface, and examined near-field reaching space distances of 38 to 48 cm. Experiment I replicated a similar experiment reported by Edwards et al [2004], and involved 10 observers in a within-subjects design. Figure 2 shows the results. Errors ranged from -5 to +3 mm when the occluder was present, -4 to +2 mm when the occluder was absent, and observers sometimes judged the virtual object to be closer to themselves after the presentation of the occluder. We can model the strong linear effect shown in Figure 2 by considering how the observers' IPD changes as they converge to different distances. Experiment II replicated Experiment I with three experienced psychophysical observers and additional replications. The results showed significant individual differences between the observers, on the order of 8 mm, and the individual results did not follow the averaged results from Experiment I. Overall, these experiments suggest that IPD needs to be accurately modeled at each depth, and the change in IPD with vergence needs to be tracked. Our results suggest improved calibration methods, which we will validate with additional experiments. In addition, these experiments used a highly salient occluding surface, and we also intend to study the effect of occluder salience.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127231315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of two cost-differentiated virtual reality systems for perception and action tasks","authors":"M. Young, G. B. Gaylor, Scott M. Andrus, Bobby Bodenheimer","doi":"10.1145/2628257.2628261","DOIUrl":"https://doi.org/10.1145/2628257.2628261","url":null,"abstract":"Recent advances in technology and the opportunity to obtain commodity-level components have made the development and use of three-dimensional virtual environments more available than ever before. How well such components work to generate realistic virtual environments, particularly environments suitable for perception and action studies, is an open question. In this paper we compare two virtual reality systems in a variety of tasks: distance estimation, virtual object interaction, a complex search task, and a simple viewing experiment. The virtual reality systems center around two different head-mounted displays, a low-cost Oculus Rift and a high-cost Nvis SX60, which differ in resolution, field-of-view, and inertial properties, among other factors. We measure outcomes of the individual tasks as well as assessing simulator sickness and presence. We find that the low-cost system consistently outperforms the high-cost system, but there is some qualitative evidence that some people are more subject to simulator sickness in the low-cost system.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126403849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An affective movie rating system","authors":"A. Rajenderan, S. Sridharan, Reynold J. Bailey","doi":"10.1145/2628257.2628348","DOIUrl":"https://doi.org/10.1145/2628257.2628348","url":null,"abstract":"Modern media recommendation systems work based on the content of videos previously seen. While this may work for the habitual viewer, it is not always appropriate for many users whose tastes change based on their moods. A lot can be gathered from a person's facial expression while they are engaged in an activity, for example by observing someone as they view a film, we can likely tell if they enjoyed it or not. While a content providing company may not have the resources to employ human observers to watch audiences (or the inclination to do so, because it is not a very appealing idea), an automatic system that does this would be more feasible. Photoplethysmography is a field in which a person's physiological details can be gathered optically without requiring any physical contact with that person. By monitoring the intensity of light on a patch of the viewer's skin, we can accurately estimate their heart rate [Poh et al. 2011]. Our system combines heart rate measurement and facial expressions to quantify how much a user enjoys a video.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128188847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Occlusion aware digital colouring for comics","authors":"Matthew Thorne, C. Kaplan","doi":"10.1145/2628257.2628353","DOIUrl":"https://doi.org/10.1145/2628257.2628353","url":null,"abstract":"A common workflow for modern comic artists is to create an inked line drawing using traditional tools. Then the drawing is scanned as black and white line art and coloured digitally using an application such as Photoshop [Abel and Madden 2012]. The first step of digital colouring, called flatting, assigns symbolic colours to each region in an image which a colourist can use to apply final colours and other effects. There is little direct support for flatting in software, which results in a manual, labour intensive process for artists. Further, an object in a drawing may be split into multiple regions due to occlusions, requiring an artist to assign the same colour to each region of the object. This work describes a quantitative framework that allows occlusion cues, derived from vision research, to be computed and compared with each other. This framework is used to simplify comic flatting, allowing an artist to flat multiple regions in a single click. Simple gestural tools that can be used to add additional guidance to occlusion processing are also provided.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129604769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Player perception of delays and jitter in character responsiveness","authors":"Aline Normoyle, Gina Guerrero, S. Jörg","doi":"10.1145/2628257.2628263","DOIUrl":"https://doi.org/10.1145/2628257.2628263","url":null,"abstract":"Response lag in digital games is known to negatively affect a player's game experience. Particularly with networked multiplayer games, where lag is typically unavoidable, the impact of delays needs to be well understood so that its effects can be mitigated. In this paper, we investigate two aspects of lag independently: latency (constant delay) and jitter (varying delay). We evaluate how latency and jitter each affect a player's enjoyment, frustration, performance, and experience as well as the extent to which players can adjust to such delays after a few minutes of gameplay. We focus on a platform game where the player controls a virtual character through a world. We find that delays up to 300ms do not impact the players' experience as long as they are constant. When jitter was added to a delay of 200ms, however, the lag was noticed by participants more often, hindered players' ability to improve with practice, increased how often they failed to reach the goal of the game, and reduced the perceived motion quality of the character.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125718628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the perception of group emotion from full body movements in the context of virtual crowds","authors":"M. R. Carretero, A. Qureshi, Christopher E. Peters","doi":"10.1145/2628257.2628266","DOIUrl":"https://doi.org/10.1145/2628257.2628266","url":null,"abstract":"Simulating the behavior of crowds of artificial entities that have humanoid embodiments has become an important element in computer graphics and special effects. However, many important questions remain in relation to the perception of social behavior and expression of emotions in virtual crowds. Specifically, few studies have considered the role of background context on the perception of the full-body emotion expressed by sub-constituents of the crowd i.e. individuals and small groups. In this paper, we present the results of perceptual studies in which animated scenes of expressive virtual crowd behavior were rated in terms of their valence by participants. The behaviors of a task-irrelevant crowd in the background were altered between neutral, happy and sad in order to investigate effects on the perception of emotion from task-relevant individuals in the foreground. Effects of the task irrelevant background on ratings of foreground characters were found, including cases that accompanied negatively valenced stimuli.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115370438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}