{"title":"CHASE: character animation scripting environment","authors":"Christos Mousas, C. Anagnostopoulos","doi":"10.1145/2817675.2817677","DOIUrl":"https://doi.org/10.1145/2817675.2817677","url":null,"abstract":"This paper presents the three scripting commands and main functionalities of a novel character animation environment called CHASE. CHASE was developed for enabling inexperienced programmers, animators, artists, and students to animate in meaningful ways virtual reality characters. This is achieved by scripting simple commands within CHASE. The commands identified, which are associated with simple parameters, are responsible for generating a number of predefined motions and actions of a character. Hence, the virtual character is able to animate within a virtual environment and to interact with tasks located within it. An additional functionality of CHASE is supplied. It provides the ability to generate multiple tasks of a character, such as providing the user the ability to generate scenario-related animated sequences. However, since multiple characters may require simultaneous animation, the ability to script actions of different characters at the same time is also provided.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"IA-17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126556272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative intensity analysis of facial expressions using HMM and linear regression","authors":"Jing Wu, Shuangjiu Xiao","doi":"10.1145/2670473.2670501","DOIUrl":"https://doi.org/10.1145/2670473.2670501","url":null,"abstract":"In this paper, an automatic framework of facial expression analysis focusing on quantitative intensity illustration is proposed. Quantitative intensity variation could be extracted during the whole period of facial expressions, from neutral state to apex state. The logic behind this paper lies in the intensity differences of same prototype expression, and lies that these intensity differences could be illustrated by facial expression energy variation throughout expression. In order to unify video data with different frame numbers, Hidden Markov Models (HMMs) are applied to every video for classification and expression states generation. These expressions states extracted from each video showing same expression have the same length. Then given facial landmarks of key positions, energy value of each state could be demonstrated by placements of landmarks. By synthesizing states variation and energy value, intensity curves for each expression could be obtained using linear regression algorithm. In this work, we explore person-dependent and person-independent analysis of expressions, in person-dependent experiment quantitative intensity compare is tested for expression 'Happiness'.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129522940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual saliency based bag of phrases for image retrival","authors":"Lijuan Duan, Wei Ma, Jun Miao, Xuan Zhang","doi":"10.1145/2670473.2670510","DOIUrl":"https://doi.org/10.1145/2670473.2670510","url":null,"abstract":"This paper presents a saliency based bag-of-phrases (Saliency-BoP for short) method for image retrieval. It combines saliency detection with visual phrase construction to extract bag-of-phrase features. To achieve this, the method first detects salient regions in images. Then, it constructs visual phrases using the word pairs which are from the same salient regions. Finally, it extracts the bag of visual phrases from the first K salient regions to describe images. Experimental results on Corel 1K and Microsoft Research Cambridge image database demonstrated that the Saliency-BoP method outperforms related methods such as Bag-of-Words (BoW) or Saliency-BoW.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129567351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A local evaluation approach for multi-agent navigation in dynamic scenarios","authors":"Hao Jiang, Tianlu Mao, Shuangyuan Wu, Mingliang Xu, Zhaoqi Wang","doi":"10.1145/2670473.2670493","DOIUrl":"https://doi.org/10.1145/2670473.2670493","url":null,"abstract":"In recent years, the technology for crowd simulation has been applied in many fields. However, collision avoidance considering of multiple individuals and moving obstacles simultaneously is still a challenging task in this research area. In this paper, we present a novel technique for multi-agent navigation in dynamic scenario. By coupling unified representation of environment with a agent-based evaluation model, our method takes into account dynamic and static environment conditions simultaneously. Each individual make an estimation of the costs-to-moving and perform a balanced decision to react to multiple requests. Moreover, our agent-based evaluation approach provides similar operation for each agent. Therefore, we can make full use of the processing capacity of GPU with this parallel characteristic. The experimental results show that the algorithm can depict the interactions between virtual agents and dynamic environments. Also thousands of agents can be simulated in real-time.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"30 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128302073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A two-view VR shooting theater system","authors":"Hong-Da Yu, Huiyu Li, Wei-Si Sun, Wei Gai, Tingting Cui, Chu-Tian Wang, Dong-Dong Guan, Yi-Jun Yang, Chenglei Yang, W. Zeng","doi":"10.1145/2670473.2670497","DOIUrl":"https://doi.org/10.1145/2670473.2670497","url":null,"abstract":"In traditional shooting theater system, only one single scene image can be presented to all the players, and the interaction based on simulation guns is limited. In this paper, we describe a novel shooting theater system for freely moving players, which is equipped with two-view projecting system, individual surround-stereo earphone and user-customized simulation gun. To provide friendly user interaction, a new-designed strategy of simulation gun is introduced. In the end, a tennis game system is given to show the extensibility and practicability of our system.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124004167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical GPU accelerated surface reconstruction","authors":"Shuangcai Yang, Xubo Yang","doi":"10.1145/2670473.2670506","DOIUrl":"https://doi.org/10.1145/2670473.2670506","url":null,"abstract":"This paper presents a novel liquid surface reconstruction framework based on GPU acceleration. The method can fast reconstruct triangle mesh from large set of particle data. This algorithm builds parallel spatial hash table and AABB structure, which improves the efficiency of surface reconstruction. In order to fast compute triangle mesh from implicit function on GPU, we also modified the traditional Histogram Pyramid based Marching Cubes algorithm on GPU. We compared the effect and efficiency with some other surface reconstruction methods, which proves that the method can improve the reconstruction efficiency and still maintain the detail. This paper builds some scenes and generates scene animation, which proves this method can be used in many common liquid animation scenes.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116218666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast parallel construction of stack-less complete LBVH trees with efficient bit-trail traversal for ray tracing","authors":"Arturo García, Sergio Murguia, Ulises Olivares, Félix F. Ramos","doi":"10.1145/2670473.2670488","DOIUrl":"https://doi.org/10.1145/2670473.2670488","url":null,"abstract":"This paper presents an efficient space partitioning approach for building high quality Linear Bounding Volume Hierarchy (LBVH) acceleration structures for ray tracing. This method produces more regular axis-aligned bounding boxes (AABB) into a complete binary tree. This structure is fully parallelized on GPU and the process to efficiently parallelize the construction is reviewed in detail in order to guarantee the fastest build times. We also describe the traversal algorithm that is based on a stack-less bit trail method to accelerate the frame rates of rendering of 3D models in graphics processing units (GPU). We analyze diverse performance metrics such as build times, frame rates, memory footprint and average intersections by AABB and by primitive, and we compare the results with a middle split and SAH (surface area heuristic) splitting method where we show that our structure provides a good balance between fast building times and efficient ray traversal performance. This partitioning approach improves the ray traversal efficiency of rigid objects resulting on an increase of frame rates performance of 30% SAH and 50% faster than middle split.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130116645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparent-supported radiance regression function","authors":"Xue Qin, Shuangjiu Xiao","doi":"10.1145/2670473.2670498","DOIUrl":"https://doi.org/10.1145/2670473.2670498","url":null,"abstract":"A modified RRF[Ren et al. 2013] rendering method called TsRRF is presented in this paper, which support global illumination in realtime for scenes with moving transparent objects. The key idea of this method is to augment the map between object and scene. There are two kinds of method to augment this map. First, we choose different attributes which can represent the true color of an object and the relationship with the whole scene in space, and at the same time in order to get these attributes, we use GPGPU to get real time information. Second we use deep learning to get the most important information from the sample data which can decrease the overfitting. In order to get more details and make full use of the sample data, we not only partition the scene by position, but also partition by object, and we will use different TsRRF to render different light effect like reflection or refraction. The network forward propagate process will also be put into the GPU and use the parallel feature to calculate quickly. As a result, the modified method works well when dealing with the transparent objects and have a real time effect.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132892897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D fluid scene synthesis and animation","authors":"Hongyan Quan, Xiao Song, M. Yu, Yahui Song","doi":"10.1145/2670473.2670508","DOIUrl":"https://doi.org/10.1145/2670473.2670508","url":null,"abstract":"Realistic fluid scene modeling is necessary for virtual reality application. Large 3D fluid scene modeling in low performance computer with real time remains a challenge. Here we present an approach for synthesizing large 3D fluid scene with example of frame in video. Both rich of realistic texture in video frame and height field of fluid surface are employed to study. Realistic textures can enhance the synthesized fluid appearance, whereas the height field of fluid surface enables the generation of complex geometry and stochastic movement on the surface. We take advantage of fluid wave theory to study and extract wave elements from fluid surface of example frame. The extracted wave elements are clustered and rearranged into the synthesized result. MST (Minimum Spanning Tree) of wave element classes is constituted to keep local continuity to fluid surface. We demonstrate our synthesis results for different scales and different types of large 3D fluid scenes synthesis in several challenging scenarios.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115288216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph-cut based interactive image segmentation with texture constraints","authors":"Yu Zhang, Wei Ma, Luwei Yang, Lijuan Duan, Jianli Liu","doi":"10.1145/2670473.2670474","DOIUrl":"https://doi.org/10.1145/2670473.2670474","url":null,"abstract":"In the paper, we present a method of interactive image segmentation with texture constraints in the framework of graph cut. Given an image, we first gather user-marked information to establish the color and texture prior models of the foreground and background. Then, we formulate an energy function composed of color, gradient and texture terms. At last, the foreground is extracted by minimizing the energy function using graph cut. In the energy function, the texture term describes the difference between the texture prior models and the texture descriptors of each pixel to be labeled. The foreground/background texture prior model is represented as histograms of Local Binary Patterns (LBP). Every pixel to be labeled in the image has a foreground and a background texture descriptor, which are obtained by a randomized texton-searching algorithm. The newly added texture term is effective to overcome the difficulty in locating real boundaries when dealing with textured foreground/background. Experimental results demonstrate that, with the same amount of user interaction, our method generates better results than traditional ones.","PeriodicalId":240292,"journal":{"name":"International Conference on Virtual Reality Continuum and its Applications in Industry","volume":"291 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122792189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}