Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry: Latest Publications

Sodeisha Sculptural Ceramics: Digitalization and VR Interaction
Zi Siang See, U. Rey, Faye Neilson, Michael Cuneo, Alexander Barnes-Keoghan, Luke O'Donnell, Donovan Jones, L. Goodman, Sarah Johnson
DOI: https://doi.org/10.1145/3359997.3365741
Published: 2019-11-14
Abstract: This demonstration presents the development of a virtual reality (VR) research project for the VR interaction and digitization of "Sodeisha Sculptural Ceramics", a transmedia approach that showcases photogrammetry-scanned Japanese ceramic artworks in an educational and public VR exhibition setting. The early prototype involved the photogrammetry scanning of 10 sculptural ceramic works of art. These works were created by the innovative Japanese post-war artist group known as 'Sodeisha'. Newcastle Art Gallery holds one of the largest collections of Sodeisha ceramics outside of Japan and recently featured the collection in a large-scale exhibition titled SODEISHA: connected to Australia from March to May 2019. The audience used controllers to interact with objects in a virtual environment, with the option of seeing a pair of VR hands or full VR arms.
Citations: 0
Extended Reality for Midwifery Learning: MR VR Demonstration
Donovan Jones, Zi Siang See, M. Billinghurst, L. Goodman, Shanna Fealy
DOI: https://doi.org/10.1145/3359997.3365739
Published: 2019-11-14
Abstract: This demonstration presents the development of a Mixed Reality (MR) and Virtual Reality (VR) research project for midwifery student learning, and a novel approach for showing extended reality content in an educational setting. The Road to Birth (RTB) visualises the changes that occur in the female body during pregnancy, and the five days immediately after birth (postpartum), in a detailed 3D setting. In the Base Anatomy studio, users can observe the base anatomical layers of an adult female. In Pregnancy Timeline, they can scroll through the weeks of gestation to see the development of the baby and the anatomical changes of the mother throughout the pregnancy and postpartum. Finally, users can learn about the different possible birthing positions that may present in Birth Considerations. During the demo, users can experience the system in either MR or VR.
Citations: 9
Motion Volume: Visualization of Human Motion Manifolds
Masaki Oshita
DOI: https://doi.org/10.1145/3359997.3365684
Published: 2019-11-14
Abstract: The understanding of human motion is important in many areas such as sports, dance, and animation. In this paper, we propose a method for visualizing the manifold of human motions. A motion manifold is defined by a set of motions in a specific motion form. Our method visualizes the ranges of time-varying positions and orientations of a body part by generating volumetric shapes to represent them. It selects representative keyposes from the keyposes of all input motions to visualize the range of keyposes at each key timing. A geometrical volume that contains the trajectories from all input motions is generated for each body part. In addition, a geometrical volume that contains the orientations from all input motions is generated for a sample point on the trajectory. The user can understand the motion manifold by visualizing these motion volumes. In this paper, we present some experimental examples for a tennis shot form.
Citations: 3
Creation and Live Performance of Dance and Music Based on a Body-part Motion Synthesis System
A. Soga
DOI: https://doi.org/10.1145/3359997.3365749
Published: 2019-11-14
Abstract: We developed a Body-part Motion Synthesis System (BMSS), which allows users to create choreography by synthesizing body-part motions and to simulate them in 3D animation. To explore the possibilities of using BMSS for creative activities, two dances with different concepts were created and performed by a dancer and a musician. We confirmed that BMSS may be able to generate effective choreographic motions for dance and to support their creation easily and quickly. Moreover, creation using BMSS may fuel new collaboration or interaction between dancers and musicians.
Citations: 1
Visualizing and Interacting with Hierarchical Menus in Immersive Augmented Reality
Majid Pourmemar, Charalambos (Charis) Poullis
DOI: https://doi.org/10.1145/3359997.3365693
Published: 2019-11-14
Abstract: Graphical User Interfaces (GUIs) have long been used as a way to inform the user of the large number of available actions and options. GUIs in desktop applications traditionally appear in the form of two-dimensional hierarchical menus due to the limited screen real estate, the spatial restrictions imposed by the hardware (e.g. 2D displays), and the available input modalities (e.g. mouse/keyboard point-and-click, touch, dwell-time, etc.). In immersive Augmented Reality (AR), there are no such restrictions and the available input modalities are different (i.e. hand gestures, head pointing, or voice recognition), yet the majority of AR applications still use the same type of GUIs as desktop applications. In this paper we focus on identifying the most efficient combination of (hierarchical menu type, input modality) to use in immersive applications using AR headsets. We report on the results of a within-subjects study with 25 participants who performed a number of tasks using four combinations of the most popular hierarchical menu types with the most popular input modalities in AR, namely: (drop-down menu, hand gestures), (drop-down menu, voice), (radial menu, hand gestures), and (radial menu, head pointing). Results show that the majority of the participants (60%, 15) achieved faster performance using the hierarchical radial menu with head-pointing control. Furthermore, the participants clearly indicated the radial menu with head-pointing control as the most preferred interaction technique due to its limited physical demand, as opposed to the current de facto interaction technique in AR, hand gestures, which after prolonged use become physically demanding, leading to arm fatigue known as 'Gorilla arms'.
Citations: 9
A Bowl-Shaped Display for Controlling Remote Vehicles
Shio Miyafuji, Florian Perteneder, Toshiki Sato, H. Koike, G. Klinker
DOI: https://doi.org/10.1145/3359997.3365706
Published: 2019-11-14
Abstract: This paper proposes a bowl-shaped hemispherical display for observing omnidirectional images. This display type has many advantages over conventional, flat 2D displays, in particular when it is used for controlling remote vehicles. First, it allows users to observe an azimuthal equidistant view of omnidirectional images by looking from above. Second, it provides a first-person view by looking into the inside of the hemispherical surface from diagonally above. Third, it provides a pseudo-third-person view, as if we watched the remote vehicle from its back, by observing both the inside and outside at the same time from obliquely above. These characteristics solve the issues of blind angles around the remote vehicle. We conduct a VR-based user study to compare the bowl-shaped display to an equirectangular projection on a 2D display and a first-person view used in head-mounted displays. Based on the insights gained in the study, we present a real-world implementation and describe the uniqueness, advantages, but also shortcomings of our method.
Citations: 3
Multi-Modal High-End Visualization System
Conan Bourke, T. Bednarz
DOI: https://doi.org/10.1145/3359997.3365731
Published: 2019-11-14
Abstract: This paper describes a production-grade software toolkit used for shared multi-modal visualization systems developed by the Expanded Perception and Interaction Centre. Our High-End Visualization System (HEVS) can be used as a framework to enable content to run transparently on a wider range of platforms (Figure 2) with fewer compatibility issues and dependencies on commercial software. Content can be transferred more easily from large screens (including cluster-driven systems) such as CAVE-like platforms, hemispherical domes, and projected cylindrical displays through to multi-wall displays and HMDs for VR or AR. This common framework provides a unifying approach to visual analytics and visualizations. In addition to supporting multi-modal displays, multiple platforms can be connected to create multi-user collaborative experiences across remotely located labs. We aim to demonstrate multiple projects developed with HEVS that have been deployed to various multi-modal display devices.
Citations: 9
3D Human Avatar Digitization from a Single Image
Zhong Li, Lele Chen, Celong Liu, Yu Gao, Yuanzhou Ha, Chenliang Xu, Shuxue Quan, Yi Xu
DOI: https://doi.org/10.1145/3359997.3365707
Published: 2019-11-14
Abstract: With the development of AR/VR technologies, a reliable and straightforward way to digitize the three-dimensional human body is in high demand. Most existing methods use complex equipment and sophisticated algorithms, which is impractical for everyday users. In this paper, we propose a pipeline that reconstructs a 3D human avatar at a glance. Our approach simultaneously reconstructs the three-dimensional human geometry and whole-body texture map with only a single RGB image as input. We first segment the human body from the image and then obtain an initial body geometry by fitting the segment to a parametric model. Next, we warp the initial geometry to the final shape by applying a silhouette-based dense correspondence. Finally, to infer the invisible backside texture from a frontal image, we propose a network we call InferGAN. Comprehensive experiments demonstrate that our solution is robust and effective on both public and our own captured data. Our human avatars can be easily rigged and animated using MoCap data. We developed a mobile application that demonstrates this capability in AR/VR settings.
Citations: 10
From Lab to Field: Demonstrating Mixed Reality Prototypes for Augmented Sports Experiences
Wei Hong Lo, S. Zollmann, H. Regenbrecht, Moritz Loos
DOI: https://doi.org/10.1145/3359997.3365728
Published: 2019-11-14
Abstract: Traditional sports event data have no direct spatial relationship to what spectators see when attending a live sports event. The idea of our work is to address this gap and ultimately to provide spectators with insights into a sports game by embedding sports statistics into their field of view of the game using mobile Augmented Reality. Research in the area of live sports events comes with several challenges, such as tracking and visualisation, as well as the limited opportunities to test and study new features during live games on-site. In this work, we developed a set of prototypes that allow for researching dedicated features of an AR sports spectator experience off-site in the lab before testing them live on the field.
Citations: 3
Embodied Weather: Promoting Public Understanding of Extreme Weather Through Immersive Multi-Sensory Virtual Reality
Pingchuan Ke, Kai-Ning Keng, Shanshan Jiang, Shaoyu Cai, Zhiyi Rong, Kening Zhu
DOI: https://doi.org/10.1145/3359997.3365718
Published: 2019-11-14
Citations: 8