{"title":"A-3DI core: a framework for adding adaptative behaviour into VR applications","authors":"Pierre Boudoin, S. Otmane, M. Mallem, H. Maaref","doi":"10.1145/1477862.1477924","DOIUrl":"https://doi.org/10.1145/1477862.1477924","url":null,"abstract":"VR systems need more and more sensors to provide the most efficient 3D interaction in the virtual world. With the multiplication of sensors in these systems, new constraints emerged, especially how to preserve the 3D interaction continuity. An approach for answering this is to process the huge amount of incoming data and how to estimate a correct interpretation from it. However data processing is often a complex approach. In this paper, we present to you a framework that can be used such an upper-layer on the hardware layer to provide data processing and data fusion to a virtual reality application and so provide an adaptative behaviour to the system.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127566679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparent shader-based Direct3D9 application parallelization for graphics cluster","authors":"Z. Liu, Chenyang Cui, Jiaoying Shi","doi":"10.1145/1477862.1477865","DOIUrl":"https://doi.org/10.1145/1477862.1477865","url":null,"abstract":"Vertex shader and pixel shader are new programmable units of Graphics Processing Unit (GPU). According to their architecture and Direct3D9 application execution flow on single node, we present transparent shader-based Direct3D9 application parallelization strategy. We has divided graphics cluster into two types of logical node, i.e. resource distributing node (D-Node) and resource rendering node (R-Node). Among them, D-Node is responsible for converting Direct3D9 application to six kinds of rendering resource, including command stream, vertex shader, pixel shader, vertex stream, index stream and texture stream, R-Node is responsible for reconstructing Direct3D9 interface rendering command based on the description information and resource data of received rendering resource. Each R-Node distributes rendering task by computing the bounding box of multi-stream based scene data in the screen space. Experimental results have shown that this strategy can realize transparent shader-based Direct3D9 application parallelization and support high-resolution tiled display. In contrast to single node rendering, four nodes parallel rendering based on graphics cluster can not only promote rendering performance but also achieve average speedup at 2.9.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"641 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121985365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic highlight removal in visual hull rendering by Poisson editing","authors":"J. Feng, Bingfeng Zhou","doi":"10.1145/1477862.1477923","DOIUrl":"https://doi.org/10.1145/1477862.1477923","url":null,"abstract":"In image-based visual hull (IBVH) rendering, highlight spots in reference images often cause undesired artifacts in the rendering results. In this paper, we propose a method that can automatically recognize and remove highlight spots from reference images, by utilizing the features of IBVH. First, highlight sub-images are extracted by histogram analyzing; then, their counterparts in other images are calculated. The highlight pixels can be retrieved from their corresponding pixels by pixel blending, and finally be seamlessly integrated back into the target image by resolving a Poisson equation.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127276001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time image-based 3D avatar for immersive game","authors":"Chulhan Lee, Hohyun Lee, Kyoungsu Oh","doi":"10.1145/1477862.1477921","DOIUrl":"https://doi.org/10.1145/1477862.1477921","url":null,"abstract":"We developed an action game which is based on the interaction between real-time 3D avatar of the player and the virtual game characters. The 3D avatar is reconstructed from multi-view images of the player with image-based modeling and rendering techniques. The 3D avatar can be dynamically reconstructed and rendered in real time by using the Hardware-accelerated Visual Hull(HAVH) method. The visual appearances and physical activities of the player are projected onto the avatar. The player can see themselves in the virtual 3D space and interact with the virtual objects through the bodily movements of the avatar in the gaming world. The combination of movement-based interaction and realistic visual appearance makes games more realistic and immersive.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134244695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive computation of computer-generated holograms","authors":"Shuhong Xu, F. Farbiz, S. Solanki, Xinan Liang, Xuewu Xu","doi":"10.1145/1477862.1477908","DOIUrl":"https://doi.org/10.1145/1477862.1477908","url":null,"abstract":"This paper proposes an adaptive approach for reducing the computational load of computer-generated holograms (CGHs). Instead of using the whole hologram plate resolution or carrying out point-wise judgment, this approach pre-divides the object space into subspaces and calculates an effective hologram plate region for each subspace according to interference fringe spatial frequency and grating diffraction energy distribution. As both the very low and very high spatial frequency portions are filtered out, the quality of the reconstructed image is even better in terms of sharpness. This space subdivision based approach also makes parallel CGH computing more straightforward. A simple load-balancing strategy is given.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131105106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume decomposition and hierarchical skeletonization","authors":"Xiaopeng Zhang, Jianfei Liu, Zili Li, M. Jaeger","doi":"10.1145/1477862.1477885","DOIUrl":"https://doi.org/10.1145/1477862.1477885","url":null,"abstract":"Skeletons and shape components are important shape features, and they are useful for shape description and shape understanding. Techniques to extract these features from volume data are analyzed in this paper based on multiple distance transformations. This work includes an establishment of the hierarchical structure of the object volume, a decomposition of the volume into simple sub-volumes, an extraction of compact skeleton segments corresponding to each independent sub-volume, and a connection of these skeleton segments into a hierarchical structure corresponding to that of the original volume. Applications of this algorithm can be shape recognition, shape measurement, navigation planning, and others.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115655208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object-adaptive tracking for AR guidance system","authors":"Hanhoon Park, Jihyun Oh, Byung-Kuk Seo, Jong-Il Park","doi":"10.1145/1477862.1477868","DOIUrl":"https://doi.org/10.1145/1477862.1477868","url":null,"abstract":"This paper proposes a model-based object-adaptive tracking method which uses both edges and feature points as vision cues and flexibly adjusts the contribution of each vision cue using a single parameter based on the characteristics of tracking object and the initial conditions. It will be shown that, in many situations where conventional object tracking methods do not work, the proposed method provides reasonably good results. The proposed object-adaptive tracking method worked at 20 fps on UMPC with an average tracking error within 3 pixels when the camera image resolution is 640 by 480 pixels and this real-time capability enabled the proposed method to be successfully applied to an augmented reality (AR) guidance system for the National Science Museum.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124754239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control theory based real-time rendering","authors":"Gabriyel Wong, Jianliang Wang","doi":"10.1145/1477862.1477913","DOIUrl":"https://doi.org/10.1145/1477862.1477913","url":null,"abstract":"In this paper, we introduce concepts in control theory to provide fundamental mechanisms for sustainable and predictable performance in real-time computer graphics software. The objective is to provide a comprehensive foundation in distilling and adapting control principles for real-time rendering so that new classes of such powerful software may be conceived and developed.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129164872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Occlusion registration in video-based augmented reality","authors":"Jiejie Zhu, Zhigeng Pan","doi":"10.1145/1477862.1477875","DOIUrl":"https://doi.org/10.1145/1477862.1477875","url":null,"abstract":"Augmented Reality overlaps virtual objects on real world. It mixes a real and virtual world which can generate more semantic meanings than either one. To seamlessly merge virtual and real objects, model registration is an important problem. This paper provides a practical approach to do occlusion registration which is a key cue for users to understand the scene. We apply our method to Video-based Augmented Reality where detecting occlusion relationship is challenging because virtual objects are simply superimposed on images of real scenes. By estimating the dense depth of real objects from stereo, results show our approach can efficiently and correctly register virtual and real objects.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125660758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting 3D interactions with haptic guide in a large scale virtual environment","authors":"S. Ullah, Nassima Ouramdane, S. Otmane, P. Richard, F. Davesne, M. Mallem","doi":"10.1145/1477862.1477891","DOIUrl":"https://doi.org/10.1145/1477862.1477891","url":null,"abstract":"Interaction techniques play a vital role in virtual environments' enrichment and have profound effects on the user's performance and sense of presence as well as realism of the Virtual Environment (VE). In this paper we present new haptic guides for object selection. It is utilized to augment the Follow-Me 3D interaction technique dedicated to object selection and manipulation. We divide the VE into three different zones (free manipulation, visual and haptic assistance zones). Each one of the three zones is characterized by a specific interaction granularity which defines the properties of the interaction in the concerned zone. This splitting of VE is aimed to have both precision and assistance (zones of visual and haptic guidance) near the object to reach or to manipulate and to maintain a realistic and free interaction in the VE (free manipulation zone). The haptic and visual guides assist the user in object selection. The paper presents two different models of the haptic guides, one for free and multidirectional selection and the second for precise and single direction selection. The evaluation and comparison of these haptic guides are given and their effect on the user's performance in object selection in VE is investigated.","PeriodicalId":182702,"journal":{"name":"Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry","volume":"24 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123049762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}