H. Otaka, Ai Nieda, Naruhito Toyoda, Megumi Tasaki, Ryo Takatama, D. Kuwabara, Masashi Sakamoto. "CG aided makeup design to understand and manipulate the impression of facial look and attractiveness." In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (SAP '15), September 2015. DOI: https://doi.org/10.1145/2804408.2814181
Abstract: Facial color and texture shape the impressions of facial look and attractiveness (e.g., gorgeous, sophisticated, warm-hearted). These impressions can be altered by facial makeup, including foundation, lip makeup, eye makeup, eyebrow makeup, and cheek makeup. Foundation changes facial skin texture and adjusts skin tone; lip makeup changes lip color and texture. However, it is difficult to characterize makeup impressions precisely through questionnaires, because the meaning of the language used in a questionnaire depends on the customer's culture, lifestyle, or country. In addition, questionnaires cannot measure elements such as color, radiance, and shape, even though these elements influence makeup preference. In our previous study, we therefore developed an eyelash makeup design system using computer graphics for quantitative interpretation of makeup impressions. It remains poorly understood, however, which types of color and texture in specific face parts correspond to each impression of facial attractiveness. We aim to understand these correspondences and to manipulate facial impressions at will through makeup. In the present study, using MAYA, we first create a CG image of an average face shape as an original image. We then manipulate this original image to create nine images with various combinations of makeup, including foundation, lip, eye, eyebrow, and cheek; each of the nine images is intended to convey one specific impression. We evaluate whether the actual visual impressions these images make on viewers correspond to our intended impressions of attractiveness.

Sarah H. Creem-Regehr, Jeanine K. Stefanucci, W. Thompson, N. Nash, Michael McCardell. "Egocentric distance perception in the Oculus Rift (DK2)." In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (SAP '15), September 2015. DOI: https://doi.org/10.1145/2804408.2804422
Abstract: Perceiving an accurate sense of absolute scale is important for the utility of virtual environments (VEs). Research shows that absolute egocentric distances are underestimated in VEs compared to the same judgments made in the real world, but there are inconsistencies in the amount of underestimation. We examined two possible factors behind the variation in the magnitude of distance underestimation, comparing egocentric distance judgments in a high-cost (NVIS SX60) and a low-cost (Oculus Rift DK2) HMD using both indoor and outdoor highly realistic virtual models. Performance more accurately matched the intended distance in the Oculus than in the NVIS, and regardless of the HMD, distances were underestimated more in the outdoor than in the indoor VE. These results suggest promise for the future use of consumer-level wide field-of-view HMDs in space perception research and applications, and underscore the importance of environmental context as a factor in the perception of absolute scale within VEs.

{"title":"Evaluating the Uncanny valley with the implicit association test","authors":"Katja Zibrek, R. Mcdonnell","doi":"10.1145/2804408.2814179","DOIUrl":"https://doi.org/10.1145/2804408.2814179","url":null,"abstract":"Despite the elusive term \"Uncanny Valley\", research in the area of appealing virtual humans approaching realism continues. The theory suggests that characters lose appeal when they approach photorealism (e.g., [MacDorman et al. 2009]). Realistic virtual characters are judged harshly, since the human visual system has acquired more expertise with the featural restrictions of other humans than with the restrictions of artificial characters [Seyama and Nagayama 2007]. Stylisation (making the character's appearance abstract) is therefore often used to avoid virtual characters to be perceived as unpleasant. We designed an experiment to test if there is a general affinity towards abstract as oppose to realistic characters.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117193541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth-based subtle gaze guidance in virtual reality environments","authors":"S. Sridharan, James Pieszala, Reynold J. Bailey","doi":"10.1145/2804408.2814187","DOIUrl":"https://doi.org/10.1145/2804408.2814187","url":null,"abstract":"Virtual reality headsets and immersive head-mounted displays have become commonplace and have found their applications in digital gaming, film and education. An immersive perception is created by surrounding the user of the VR system with photo-realisitic scenes, sound or other stimuli (e.g. haptic) that provide an engrossing experience to the viewer. The ability to interact with the objects in the virtual environment have added greater interest for its use in learning and education. In this proposed work we plan to explore the ability to subtly guide viewers' attention to important regions in a controlled 3D virtual scene. Subtle gaze guidance [Bailey et al. 2009] approach combines eye-tracking and subtle imagespace modulations to guide viewer's attention about a scene. These modulations are terminated before the viewer can fixate on them using their high acuity foveal vision. This approach is preferred over other overt techniques that make permanent changes to the scene being viewed. This approach has also been tested in controlled realworld environments [Booth et al. 2013]. The key challenge to such a system, is the need for an external projector to present modulations on the scene objects to guide viewer's attention. However a VR system enables the user to view and interact in a 3D scene that is close to reality, thereby allowing researchers to digitally manipulate the 3D scene for active gaze guidance.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115133419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of avatar model in stepping off a ledge in an immersive virtual environment","authors":"Bobby Bodenheimer, Qiang Fu","doi":"10.1145/2804408.2804426","DOIUrl":"https://doi.org/10.1145/2804408.2804426","url":null,"abstract":"Animated digital self-representations of the user in an immersive virtual environment, a self-avatar, have been shown to aid in perceptual judgments in the virtual environment and to provide critical information for people deciding what actions they can and cannot take. In this paper we explore whether the form of the self-avatar is important in providing this information. In particular, we vary the form of a self-avatar between having no self-avatar, a simple line-based skeleton avatar, or a full-body, gender-matched self-avatar and examine whether the form of the self-avatar affects peoples judgments in whether they could or could not step off of a virtual ledge. Our results replicate prior work that shows that having a self-avatar provides critical information for this judgment, but finds no difference in the form of the self-avatar having an effect on the judgment.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115514975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodrigo Martín, Julian Iseringhausen, Michael Weinmann, M. Hullin. "Multimodal perception of material properties." In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (SAP '15), September 2015. DOI: https://doi.org/10.1145/2804408.2804420
Abstract: The human ability to perceive materials and their properties is a very intricate multisensory skill, and as such not only an intriguing research subject but also an immense challenge when creating realistic virtual presentations of materials. In this paper, our goal is to learn how the visual and auditory channels contribute to our perception of characteristic material parameters. At the center of our work are two psychophysical experiments performed on tablet computers, in which subjects rated a set of perceptual material qualities under different stimuli. The first experiment covers a full collection of materials in different presentations (visual, auditory, and audio-visual); as a point of reference, subjects also performed all ratings on physical material samples. A key result of this experiment is that auditory cues strongly benefit the perception of certain qualities of a tactile nature (such as "hard--soft" and "rough--smooth"). The follow-up experiment demonstrates that, to a certain extent, audio cues can also be transferred to other materials, exaggerating or attenuating some of their perceived qualities. From these results, we conclude that a multimodal approach, and in particular the inclusion of sound, can greatly enhance the digital communication of material properties.

{"title":"The effects of minification and display field of view on distance judgments in real and HMD-based environments","authors":"Bochao Li, Ruimin Zhang, A. Nordman, S. Kuhl","doi":"10.1145/2804408.2804427","DOIUrl":"https://doi.org/10.1145/2804408.2804427","url":null,"abstract":"Distance perception is important for many virtual reality applications, and numerous studies have found underestimated egocentric distances in head-mounted display (HMD) based virtual environments. Applying minification to imagery displayed in HMDs is a method that can reduce or eliminate the underestimation [Kuhl et al. 2009; Zhang et al. 2012]. In a previous study, we measured distance judgments with direct blind walking through an Oculus Rift DK1 HMD and found that participants judged distance accurately in a calibrated condition, and minification caused subjects to overestimate distances [Li et al. 2014]. This article describes two experiments built on the previous study to examine distance judgments and minification with the Oculus Rift DK2 HMD (Experiment 1), and in the real world with a simulated HMD (Experiment 2). From the results, we found statistically significant distance underestimation with the DK2, but the judgments were more accurate than results typically reported in HMD studies. In addition, we discovered that participants made similar distance judgments with the DK2 and the simulated HMD. Finally, we found for the first time that minification had a similar impact on distance judgments in both virtual and real-world environments.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122424184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"4-D spatial perception established through hypercube recognition tasks using interactive visualization system with 3-D screen","authors":"Takanobu Miwa, Yukihito Sakai, S. Hashimoto","doi":"10.1145/2804408.2804417","DOIUrl":"https://doi.org/10.1145/2804408.2804417","url":null,"abstract":"We have developed an interactive 4-D visualization system that employed the principal vanishing points operation as a method to control the movement of the eye-point and the change in the viewing direction in 4-D space. Different from conventional 4-D visualization and interaction techniques, the system can provide intuitive observation of 4-D space and objects by projecting them onto 3D space in real time from various positions and directions in 4-D space. Our next challenge is to examine whether humans are able to develop a spatial perception of 4-D space and objects through 4-D experiences provided by the system. In this paper, as the first step toward our aim, we assessed whether participants were able to get intuitive spatial understanding of 4-D objects. In the evaluation experiment, firstly, the participants learned a structure of a hypercube. Then, we evaluated their spatial perception developed in the learning period by tasks of controlling the 4-D eye-point and reconstructing the hypercube from a set of its 3-D projection drawings. The results indicated evidence for that humans were able to get 4-D spatial perception by operating the system.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"127 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132802472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","authors":"","doi":"10.1145/2804408","DOIUrl":"https://doi.org/10.1145/2804408","url":null,"abstract":"","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129388292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}