Title: Dynamic adaptive mesh streaming for real-time 3D teleimmersion
Authors: Simon Crowle, Alexandros Doumanoglou, Benjamin Poussard, M. Boniface, D. Zarpalas, P. Daras
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775296
Abstract: Recent advances in full body 3D reconstruction methods have led to high-quality, real-time, photo-realistic capture of users in a range of tele-immersion (TI) contexts, including gaming and mixed reality environments. The full body reconstruction (FBR) process is computationally expensive, requiring comparatively high CPU, GPU and network resources in order to maintain a shared virtual reality in which high-quality 3D reproductions of users can be rendered in real time. A significant optimisation of the delivery of FBR content has been achieved through real-time compression and decompression of 3D geometry and textures. Here we present a new, adaptive compression methodology that allows a TI system called 3D-LIVE to modify the quality and speed of an FBR TI pipeline based on the data-carrying capability of the network. Our rule-based adaptation strategy uses network performance sampling and a configurable rule engine to dynamically alter the compression of FBR content on the fly. We demonstrate the efficacy of the approach with an experimental evaluation of the system and conclude with a discussion of future directions for adaptive FBR compression.
Title: Model-based design of multimodal interaction for augmented reality web applications
Authors: S. Feuerstack, Allan Oliveira, Mauro dos Santos Anjo, R. B. Araujo, E. Pizzolato
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775293
Abstract: Despite the increasing use of Augmented Reality (AR) in many different application areas, implementation support is limited and still driven by development at source-code level. Although efforts have been made to overcome these limitations, there is a clear gap between authoring environments and source-code-level frameworks for creating AR interfaces for the web with multimodal control. Model-based design of interaction can help fill this gap. However, to the best of our knowledge, declarative, model-driven design (MDD) has not yet been applied to model AR interfaces for a wide spectrum of modes. This paper therefore presents an extension of model-driven design to cope with interactors; its novelty lies in a modeling approach that supports AR developers and designers in designing new forms of interaction that can later be used in authoring environments. To validate our approach, we demonstrate how a reality-spanning drag-and-drop interaction can be modeled for an online furniture shop, and we implement a gesture-based control to show how new control modes can be added to an existing MDD-based design to extend its interaction capabilities.
Title: Fusality: an open framework for cross-platform mirror world installations
Authors: Nicholas F. Polys, R. B. Knapp, Matthew Bock, Christina Lidwin, Dane Webster, Nathan Waggoner, I. Bukvic
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775317
Abstract: As computing and displays become more pervasive and wireless networks increase the connections between people and things, humans inhabit both digital and physical realities. In this paper we describe our prototype Mirror Worlds framework, which is designed to fuse these realities: Fusality. Our goal for Fusality is to support innovative research and exhibitions in presence and collaboration, sensors and smart buildings, and mixed reality, in applications from engineering to art. By fusing live sensor data from the building and its occupants with online 3D environments and participants, we demonstrate a first-principles approach to online multi-entity messaging and communication. This demonstration shows how the variety of Mirror Worlds clients can be supported through the open Web architecture. These technologies enable new possibilities for collaboration as well as directions for interoperability. Finally, we lay out our research agenda for the framework and discuss its transformative applications.
Title: Evaluating 3D thumbnails for virtual object galleries
Authors: Max Limper, Florian Brandherm, D. Fellner, Arjan Kuijper
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775314
Abstract: Virtual 3D object galleries on the Web nowadays often use real-time, interactive 3D graphics. However, the same does not yet usually hold for their preview images, sometimes referred to as thumbnails. We provide a technical analysis of the applicability of so-called 3D thumbnails within the context of virtual 3D object galleries. Like a 2D thumbnail for an image, a 3D thumbnail acts as a compact preview for a real 3D model. In contrast to an image series, however, it enables a wider variety of interaction methods and rendering effects. Through a case study, we show that such true 3D representations are, under certain circumstances, even able to outperform 2D image series in terms of bandwidth consumption. We thus present a complete pipeline for generating compact 3D thumbnails for given meshes in a fully automatic fashion.
Title: webVis/instant3DHub: visual computing as a service infrastructure to deliver adaptive, secure and scalable user centric data visualisation
Authors: J. Behr, C. Mouton, S. Parfouru, J. Champeau, Clotilde Jeulin, Maik Thöner, Christian Stein, Michael Schmitt, Max Limper, Miguel de Sousa, T. Franke, G. Voss
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775299
Abstract: This paper presents the webVis/instant3DHub platform, which combines a novel Web-Components-based framework and a Visual Computing as a Service infrastructure to deliver an interactive 3D data visualisation solution. The system focuses on minimising resource consumption while maximising the end-user experience. It utilises an adaptive and automated combination of client, server and hybrid visualisation techniques, orchestrating transmission, caching and rendering services to deliver structurally and semantically complex data sets on any device class and network architecture. The API and Web Component framework allow the application developer to compose and manipulate complex data setups with a simple set of commands inside the browser, without requiring knowledge of the underlying service infrastructure, interfaces and fully automated processes. This results in a new class of interactive applications, built around a canvas for real-time visualisation of massive data sets.
{"title":"Hybrid visualisation of digital production big data","authors":"A. Evans, J. Agenjo, J. Blat","doi":"10.1145/2775292.2775319","DOIUrl":"https://doi.org/10.1145/2775292.2775319","url":null,"abstract":"In this paper, we present a web application for the hybrid visualisation of digital production Big Data. In a typical film or television production, several terabytes of data can be recorded per day, such as film footage from multiple cameras or background information regarding the set. Interactive visualisation of this multimodal data, integrating 2D (image and video) and 3D graphics modes, would result in enhanced use. A browser-based context is capable of this integration in a seamless and powerful manner, but faces significant challenges related to data transfer and compression which must be overcome. This paper presents an application designed to harness the power of a hybrid web context while attempting to overcome or compensate for the difficulties of data transfer limitations and rendering power. Results are presented from three, publicly available test datasets, which represent a realistic sample of data recorded on a typical high-budget production set.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114754400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Exploring the Jenolan Caves: bringing the physical world to 3D online education
Authors: Matt Adcock, Stuart Anderson, S. Berkovsky, Paul Flick, Dennis Frousheger, Brett Grandbois, C. Gunn, Dave Haddon, Jane Li, Thomas Lowe, Benjamin Mackey, Fred Pauling, Christian Richter, Kazys Stepanas, Fletcher Talbot, Gavin Walker, B. Ward, Luke Tomes, Sally Miles, Daniel Keogh, B. Colenso, Hay Wie Lie, D. Longmore, Jayden Hanly, D. Canavan
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2778298
Abstract: In August 2014, CSIRO and 3P Learning (through subsidiary IntoScience) launched what is probably Australia's biggest (and arguably coolest) school excursion ever. In classrooms around the country, students can now set out to explore the spectacular Jenolan Caves located in the scenic Blue Mountains. Students are immersed, via the web, in an authentic 3D digital recreation of the Jenolan Caves to discover the science behind cave formation.
{"title":"Animation on the web: a survey","authors":"Amit L. Ahire, A. Evans, J. Blat","doi":"10.1145/2775292.2775298","DOIUrl":"https://doi.org/10.1145/2775292.2775298","url":null,"abstract":"The main motivation of this paper is to provide a current state and a brief overview of animation on the web. Computer animation is used in many fields and it has seen a lot of development in the recent years. With the widespread use of WebGL and the age of powerful modern hardware available on small devices, 3D rendering on the browser is now becoming commonplace. Computer Animation can be described as the rendering of objects on screen, which can change shape and properties with respect to time. There are many approaches to rendering animation on the web, but none of them yet provide a coherent approach in terms of transmission, compression and handling of the animation data on the client side (browser). And if computer animation has to become more accessible over the web, these challenges need to be addressed in the same \"minimalistic manner (requirement wise)\" as every other multimedia content has been addressed on the web. We aim to provide an overview of the current state of the art, while commenting on the shortcomings pertaining to current formats/approaches and discuss some of the upcoming standards and trends which can help with the current implementation.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130537883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Synchronized delivery of 3D scenes with audio and video
Authors: C. Concolato, J. L. Feuvre, Emmanouil Potetsianakis
Venue: Proceedings of the 20th International Conference on 3D Web Technology, 18 June 2015
DOI: https://doi.org/10.1145/2775292.2775324
Abstract: Nowadays, 3D graphics have established their presence on the web alongside audio and video. In fact, 3D scenes are often used in conjunction with audio and video to create virtual worlds. However, the diverse nature of these various media components raises synchronization and packaging challenges. In order to address these challenges, we propose packaging 3D scenes, with audio and video, inside MP4 containers. This way, the 3D and other media are delivered as a whole, and on the receiving end we are able to extract and synchronize the content from within the browser. In this paper, we explain our methodology and present an end-to-end example scenario, together with its associated implementation using open-source tools.
{"title":"Volumetric texture data compression scheme for transmission","authors":"Yeonsoo Yang, Ankit Sharma, Armand Girier","doi":"10.1145/2775292.2775323","DOIUrl":"https://doi.org/10.1145/2775292.2775323","url":null,"abstract":"We present a texture data compression technique for transmission of volumetric weather radar data. High resolution volume rendering of such time varying volumetric data sets requires large size and number of texture files. In case of WebGL based volume rendering on web browsers, performance problems occur due to the latencies associated with loading of 3D assets, especially transmission of huge texture files. Existing compression technique can help to solve this problem. The most relevant work is S3TC texture compression. As an advanced compression scheme, we combine the S3TC compression method with efficient encoding of volume data in the RGBA channels of an image followed by DEFLATE compression to further reduce the file size. We show how this optimized scheme with X3D/X3DOM extensions fits into our weather data visualization application requirements and provide experimental results.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122674286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}