{"title":"An evaluation of spatial presence, social presence, and interactions with various 3D displays","authors":"D. Thalmann, Jun Lee, N. Magnenat-Thalmann","doi":"10.1145/2915926.2915954","DOIUrl":"https://doi.org/10.1145/2915926.2915954","url":null,"abstract":"This paper presents an immersive volleyball game, where a player plays not only against virtual opponents but also with support on his/her side of virtual teammates. This volleyball game has been implemented for several 3D displays such as a stereoscopic display, an autostereoscopic display, Oculus Rift glasses, and a 320o Immersive Room. In this paper, we also propose a user study of the relations between virtual humans and the sense of presence in the different 3D displays. We particularly study how surrounding virtual humans affect the sense of presence. Results show that users more significantly perceived spatial presence of virtual environment and social presence of virtual humans with the Oculus Rift and the Immersive Room.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123105594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rapid Photorealistic Blendshape Modeling from RGB-D Sensors","authors":"D. Casas, Andrew W. Feng, O. Alexander, Graham Fyffe, P. Debevec, Ryosuke Ichikari, Hao Li, Kyle Olszewski, Evan A. Suma, Ari Shapiro","doi":"10.1145/2915926.2915936","DOIUrl":"https://doi.org/10.1145/2915926.2915936","url":null,"abstract":"Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing requires no artistic expertise to operate, takes 15 seconds to capture and generate a single facial expression, and approximately 1 minute of processing time per expression to transform it into a blendshape model. Our main contributions include a complete end-to-end pipeline for capturing and generating photorealistic blendshape models automatically and a registration method that solves dense correspondences between two face scans by utilizing facial landmarks detection and optical flows. We demonstrate the effectiveness of the proposed method by capturing different human subjects with a variety of sensors and puppeteering their 3D faces with real-time facial performance retargeting. The rapid nature of our method allows for just-in-time construction of a digital face. To that end, we also integrated our pipeline with a virtual reality facial performance capture system that allows dynamic embodiment of the generated faces despite partial occlusion of the user's real face by the head-mounted display.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125694765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Spatial and Temporal Coherence to Strengthen Seam Carving in Video Retargeting","authors":"D. Liu, Chi-Wei Huang","doi":"10.1145/2915926.2915953","DOIUrl":"https://doi.org/10.1145/2915926.2915953","url":null,"abstract":"In recent years, many content-aware retargeting techniques have been proposed. Among them, seam carving is a novel and efficient method, but it may distort the object's structure. For enlarging an image, we tend to make it larger and undistorted by first magnifying the image, and shrink it to the target size using seam carving. Thus, in this paper, we focus on shrinking. For spatial coherence, we emphasize the object shape and preserve significant content. We also combine seam carving and scaling operator, trying to avoid the bad results due to content distortion. Moreover, we extend our method to video retargeting, and classify the videos into those taken by the static camera setup and the others by the moving camera setup. Then we explore temporal coherence to decrease the jittery artifacts. Finally, the experimental results demonstrate our approach can raise the quality in video retargeting.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116937048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Expressive End-Effector Trajectories to Expressive Bodily Motions","authors":"Pamela Carreno-Medrano, S. Gibet, P. Marteau","doi":"10.1145/2915926.2915941","DOIUrl":"https://doi.org/10.1145/2915926.2915941","url":null,"abstract":"Recent results in the affective computing sciences point towards the importance of virtual characters capable of conveying affect through their movements. However, in spite of all advances made on the synthesis of expressive motions, almost all of the existing approaches focus on the translation of stylistic content rather than on the generation of new expressive motions. Based on studies that show the importance of end-effector trajectories in the perception and recognition of affect, this paper proposes a new approach for the automatic generation of affective motions. In this approach, expressive content is embedded in a low-dimensional manifold built from the observation of end-effector trajectories. These trajectories are taken from an expressive motion capture database. Body motions are then reconstructed by a multi-chain Inverse Kinematics controller. The similarity between the expressive content of MoCap and synthesized motions is quantitatively assessed through information theory measures.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131378727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Memory and Emotion With Dialog on Social Companion: A Review","authors":"Juzheng Zhang, N. Magnenat-Thalmann, Jianmin Zheng","doi":"10.1145/2915926.2915952","DOIUrl":"https://doi.org/10.1145/2915926.2915952","url":null,"abstract":"In the coming era of social companions, many researches have been pursuing natural dialog interactions and long-term relations between social companions and users. With respect to the quick decrease of user interests after the first few interactions, various emotion and memory models are developed and integrated with social companions for better user engagement. This paper reviews related works in the effort of combining memory and emotion with natural language dialog on social companions. We separate these works into three categories: (1) Affective system with dialog, (2) Task-driven memory with dialog, (3) Chat-driven memory with dialog. In addition, we discussed limitations and challenging issues to be solved. Finally, we also introduced our framework of social companions.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123355267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"One Step from the Locomotion to the Stepping Pattern","authors":"R. Boulic, Utku Evci, E. Molla, Phanindra Pisupati","doi":"10.1145/2915926.2915949","DOIUrl":"https://doi.org/10.1145/2915926.2915949","url":null,"abstract":"The locomotion pattern is characterized by a translation displacement mostly occurring along the forward frontal body direction, whereas local repositioning with large re-orientations, i.e. stepping, may induce translations both along the frontal and the lateral body directions (holonomy). We consider here a stepping pattern with initial and final null speeds within a radius of 40% of the body height and re-orientation up to 180°. We propose a robust step detection method for such a context and identify a consistent intra-subject behavior in terms of the choice of starting foot and the number of steps.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128842202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation of Small Social Group Behaviors in Emergency Evacuation","authors":"Ruilin Xie, Zhicheng Yang, Yunshi Niu, Yanci Zhang","doi":"10.1145/2915926.2919325","DOIUrl":"https://doi.org/10.1145/2915926.2919325","url":null,"abstract":"In this paper, we present a novel method to simulate the influences of small social group on pedestrian's behaviors under emergency situations. Our method is built on an important observation that the relationships between group members are usually different and even mutual relationships between group members might be asymmetric. Two phenomena can be produced by our method based on this observation. The first is group aggregation phenomenon which means that pedestrians tend to stay closer to their socially close group members than socially distant members. The second phenomenon is the complicated process of searching for lost members which is modeled as a cost-based function in our method. This function will guide pedestrians to make many decisions like whether to look for the lost members, who will be searched for, who will conduct the search as well as when to abort the search. The experimental results show that our method can produce very real social group behaviors under emergency situations.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116684120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework for Manipulating Multi-Perspective Image Using A Parametric Surface","authors":"Katsutsugu Matsuyama, K. Konno","doi":"10.1145/2915926.2915946","DOIUrl":"https://doi.org/10.1145/2915926.2915946","url":null,"abstract":"We have designed a framework for manipulating multi-perspective images. In this paper, we present (1) an accelerated multi-perspective rendering method, (2) a parametric surface based multi-perspective camera control method and (3) an interface to manipulate multi-perspective images. Our camera control method can express transition of camera parameters by deforming the parametric surface and manipulating control points. Our rendering method performs about 2.5 times faster than the previous method at best. We also show two application examples utilizing our methods.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126982289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Sound Model for 3D Audio Games with Real Walking","authors":"Iana Podkosova, M. Urbanek, H. Kaufmann","doi":"10.1145/2915926.2915948","DOIUrl":"https://doi.org/10.1145/2915926.2915948","url":null,"abstract":"Spatialized audio is the only output that players receive in audio games. In order to provide a realistic view of the environment, it has to be of superior quality in terms of immersion and realism. Complex sound models can be used to generate realistic sound effects, including reflections and reverb. An implementation of a hybrid sound model based on the ODEON approach is introduced and adapted for real-time sound calculations. This model is evaluated and compared to a baseline model usually used in audio games in a user study in a virtual reality environment. The results show that the implemented hybrid model allows players to adjust to the game faster and provides them more support in avoiding virtual obstacles in simple room geometries than the baseline model.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129625266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aliveness metaphor for an evolutive gesture interaction based on coupling between a human and a virtual agent","authors":"P. D. Loor, R. Richard, J. Soler, Elisabetta Bevacqua","doi":"10.1145/2915926.2915932","DOIUrl":"https://doi.org/10.1145/2915926.2915932","url":null,"abstract":"This paper presents a model that provides adaptive and evolutive interaction between a human and a virtual agent. After introducing the theoretical justifications, the aliveness metaphor and the notion of coupling are presented. Then, we propose a formalization of the model that relies on the temporal evolution of the coupling between participants and the existence of phases during the interaction. An example on a fitness exergame is provided and some illustrations show the behavior of the model during an interaction. A video complements this example.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133771616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}