{"title":"Sodeisha Sculptural Ceramics: Digitalization and VR Interaction","authors":"Zi Siang See, U. Rey, Faye Neilson, Michael Cuneo, Alexander Barnes-Keoghan, Luke O'Donnell, Donovan Jones, L. Goodman, Sarah Johnson","doi":"10.1145/3359997.3365741","DOIUrl":"https://doi.org/10.1145/3359997.3365741","url":null,"abstract":"This demonstration presents the development of a virtual reality (VR) research project for the VR interaction and digitization of “Sodeisha Sculptural Ceramics”, a transmedia approach showcases photogrammetry scanned Japanese ceramic artworks in an educational and public VR exhibition setting. The early prototype has involved the photogrammetry scanning of 10 sculptural ceramic works of art. These works were created by the innovative Japanese post-war artist group, known as ‘Sodeisha’. Newcastle Art Gallery holds one of the largest collections of Sodeisha ceramics outside of Japan and recently featured the collection in a large-scale exhibition titled SODEISHA: connected to Australia from March – May 2019. The audience used controllers to interact with objects in a virtual environment, with the option of seeing a pair of VR hands or full VR arms.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130126544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extended Reality for Midwifery Learning: MR VR Demonstration","authors":"Donovan Jones, Zi Siang See, M. Billinghurst, L. Goodman, Shanna Fealy","doi":"10.1145/3359997.3365739","DOIUrl":"https://doi.org/10.1145/3359997.3365739","url":null,"abstract":"This demonstration presents a development of a Mixed Reality (MR) and Virtual Reality (VR) research project for midwifery student learning, and a novel approach for showing extended reality content in an educational setting. The Road to Birth (RTB) visualises the changes that occur in the female body during pregnancy, and the five days immediately after birth (postpartum) in a detailed 3D setting. In the Base Anatomy studio, users can observe the base anatomical layers of an adult female. In Pregnancy Timeline, they can scroll through the weeks of gestation to see the development of the baby and the anatomical changes of the mother throughout the pregnancy and postpartum. Finally, users can learn about the different possible birthing positions that may present in Birth Considerations. During the demo, users can experience the system in either MR or VR.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130472749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion Volume: Visualization of Human Motion Manifolds","authors":"Masaki Oshita","doi":"10.1145/3359997.3365684","DOIUrl":"https://doi.org/10.1145/3359997.3365684","url":null,"abstract":"The understanding of human motion is important in many areas such as sports, dance, and animation. In this paper, we propose a method for visualizing the manifold of human motions. A motion manifold is defined by a set of motions in a specific motion form. Our method visualizes the ranges of time-varying positions and orientations of a body part by generating volumetric shapes for representing them. It selects representative keyposes from the keyposes of all input motions to visualize the range of keyposes at each key timing. A geometrical volume that contains the trajectories from all input motions is generated for each body part. In addition, a geometrical volume that contains the orientations from all input motions is generated for a sample point on the trajectory. The user can understand the motion manifold by visualizing these motion volumes. In this paper, we present some experimental examples for a tennis shot form.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"609 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126699162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creation and Live Performance of Dance and Music Based on a Body-part Motion Synthesis System","authors":"A. Soga","doi":"10.1145/3359997.3365749","DOIUrl":"https://doi.org/10.1145/3359997.3365749","url":null,"abstract":"We developed a Body-part Motion Synthesis System (BMSS), which allows users to create choreography by synthesizing body-part motions and to simulate them in 3D animation. To explore the possibilities of using BMSS for creative activities, two dances with different concepts were created and performed by a dancer and a musician. We confirmed that BMSS might be able to generate effective choreographic motions for dance and easily and quickly to support its creation. Moreover, creation using BMSS might fuel new collaboration or interaction between dancers and musicians.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126250615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing and Interacting with Hierarchical Menus in Immersive Augmented Reality","authors":"Majid Pourmemar, Charalambos (Charis) Poullis","doi":"10.1145/3359997.3365693","DOIUrl":"https://doi.org/10.1145/3359997.3365693","url":null,"abstract":"Graphical User Interfaces (GUIs) have long been used as a way to inform the user of the large number of available actions and options. GUIs in desktop applications traditionally appear in the form of two-dimensional hierarchical menus due to the limited screen real estate, the spatial restrictions imposed by the hardware e.g. 2D, and the available input modalities e.g. mouse/keyboard point-and-click, touch, dwell-time etc. In immersive Augmented Reality (AR), there are no such restrictions and the available input modalities are different (i.e. hand gestures, head pointing or voice recognition), yet the majority of the applications in AR still use the same type of GUIs as with desktop applications. In this paper we focus on identifying the most efficient combination of (hierarchical menu type, input modality) to use in immersive applications using AR headsets. We report on the results of a within-subjects study with 25 participants who performed a number of tasks using four combinations of the most popular hierarchical menu types with the most popular input modalities in AR, namely: (drop-down menu, hand gestures), (drop-down menu, voice), (radial menu, hand gestures), and (radial menu, head pointing). Results show that the majority of the participants (60%, 15) achieved a faster performance using the hierarchical radial menu with head pointing control. Furthermore, the participants clearly indicated the radial menu with head pointing control as the most preferred interaction technique due to the limited physical demand as opposed to the current de facto interaction technique in AR i.e. hand gestures, which after prolonged use becomes physically demanding leading to arm fatigue known as ’Gorilla arms’.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114138518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bowl-Shaped Display for Controlling Remote Vehicles","authors":"Shio Miyafuji, Florian Perteneder, Toshiki Sato, H. Koike, G. Klinker","doi":"10.1145/3359997.3365706","DOIUrl":"https://doi.org/10.1145/3359997.3365706","url":null,"abstract":"This paper proposes a bowl-shaped hemispherical display to observe omnidirectional images. This display type has many advantages over conventional, flat 2D displays, in particular when it is used for controlling remote vehicles. First, it allows users to observe an azimuthal equidistant view of omnidirectional images by looking from above. Second, it provides a first-person view by looking into the inside of the hemispherical surface from diagonally above. Third, it provides a pseudo–third-person view as if we watched the remote vehicle from its back, by observing both the inside and outside at the same time from obliquely above. These characteristics solve the issues of blind angles around the remote vehicle. We conduct a VR-based user study to compare the bowl-shaped display to an equirectangular projection on a 2D display and a first-person view used in head-mounted displays. Based on the insights gained in the study, we present a real-world implementation and describe the uniqueness, advantages but also shortcomings of our method.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"52 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127572287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi–Modal High–End Visualization System","authors":"Conan Bourke, T. Bednarz","doi":"10.1145/3359997.3365731","DOIUrl":"https://doi.org/10.1145/3359997.3365731","url":null,"abstract":"This paper describes a production-grade software toolkit used for shared multi-model visualization systems developed by the Expanded Perception and Interaction Centre. Our High-End Visualization System (HEVS) can be used as a framework to enable content to be run transparently on a wider range of platforms (Figure 2) with fewer compatibility issues and dependencies on commercial software. Content can be transferred more easily from large screens (including cluster-driven systems) such as CAVE-like platforms, hemispherical domes, and projected cylindrical displays through to multi-wall displays and HMDs such as VRR or AR. This common framework is able to provide a unifying approach to visual analytics and visualizations. In addition to supporting multi-modal displays, multiple platforms can be connected to create multi-user collaborative experiences across remotely located labs. We aim to demonstrate multiple projects developed with HEVS that have been deployed to various multi-modal display devices.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133185416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Human Avatar Digitization from a Single Image","authors":"Zhong Li, Lele Chen, Celong Liu, Yu Gao, Yuanzhou Ha, Chenliang Xu, Shuxue Quan, Yi Xu","doi":"10.1145/3359997.3365707","DOIUrl":"https://doi.org/10.1145/3359997.3365707","url":null,"abstract":"With the development of AR/VR technologies, a reliable and straightforward way to digitize three-dimensional human body is in high demand. Most existing methods use complex equipment and sophisticated algorithms. This is impractical for everyday users. In this paper, we propose a pipeline that reconstructs 3D human shape avatar at a glance. Our approach simultaneously reconstructs the three-dimensional human geometry and whole body texture map with only a single RGB image as input. We first segment the human body part from the image and then obtain an initial body geometry by fitting the segment to a parametric model. Next, we warp the initial geometry to the final shape by applying a silhouette-based dense correspondence. Finally, to infer invisible backside texture from a frontal image, we propose a network we call InferGAN. Comprehensive experiments demonstrate that our solution is robust and effective on both public and our own captured data. Our human avatars can be easily rigged and animated using MoCap data. We developed a mobile application that demonstrates this capability in AR/VR settings.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115277820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Within a Virtual Crowd: Exploring Human Movement Behavior during Immersive Virtual Crowd Interaction","authors":"Michael G. Nelson, Alexandros Koilias, Sahana Gubbi, Christos Mousas","doi":"10.1145/3359997.3365709","DOIUrl":"https://doi.org/10.1145/3359997.3365709","url":null,"abstract":"This paper presents an exploratory study aiming at investigating the movement behavior of participants when immersed within a virtual crowd. Specifically, a crosswalk scenario was created in which a virtual crowd was scripted to cross the road once the traffic light turned green. Participants were also instructed to walk across the road to the opposite sidewalk. During that time, the assessment of participant movement behavior was captured by the use of objective measurements (time, speed, and deviation). Five density conditions (no density, low density, medium density, high density, and extreme density) were developed to investigate which had the greatest effect on the movement behavior of the participants. The results obtained indicated that the extreme density condition of the virtual crowd did indeed alter the movement behavior of participants to a significant degree. Given that density had the greatest effect on the movement behavior of participants, a follow-up study was also conducted that utilized the density findings and explored whether density can affect the speed and direction of participants. This was achieved through examining five speed conditions and six directional conditions. The follow-up study provided some evidence that during an extreme density condition the speed of the crowd also affects the movement behavior of participants. However, no alteration in human movement behavior was observed when examining the direction of the virtual crowd. Implications for future research are discussed.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124015000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Data-Driven Optimisation Approach to Urban Multi-Site Selection for Public Services and Retails","authors":"Tian Feng, Feiyi Fan, T. Bednarz","doi":"10.1145/3359997.3365686","DOIUrl":"https://doi.org/10.1145/3359997.3365686","url":null,"abstract":"Urban lifestyle depends on public services and retails, of which site locations matter to convenience for residents. We introduce a novel approach to the systematic multi-site selection for public services and retails in an urban context. It takes as input a set of data about an urban area and generates an optimal configuration of two-dimensional locations for urban sites on public services and retails. We achieve this goal using data-driven optimisation entangling deep learning. The proposed approach can cost-efficiently generate a multi-site location plan considering representative site selection criteria, including coverage, dispersion and accessibility. It also complies with the local plan and the predicted suitability regarding land-use zoning.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129027388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}