{"title":"A Perceptual Evaluation of the Ground Inclination with a Simple VR Walking Platform","authors":"Keito Morisaki, Wataru Wakita","doi":"10.1145/3489849.3489903","DOIUrl":"https://doi.org/10.1145/3489849.3489903","url":null,"abstract":"We evaluate how highly realistic the inclination of the ground can be perceived with our simple VR walking platform. Firstly we prepared seven maps with different ground inclinations of -30 to 30 degrees and every 10 degrees. Then we conducted a perception experiment of the inclination feeling with each of the treadmill and our proposed platform, and questionnaire evaluation about the presence, the fatigue, and the exhilaration. As a result, it was clarified that even if our proposed platform is used, not only the feeling of presence equivalent to that of the treadmill can be felt, but also the inclination of the ground up and down can be perceived.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"21 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114129452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceived Realism of Pedestrian Crowds Trajectories in VR","authors":"D. Giunchi, Riccardo Bovo, Panayiotis Charalambous, F. Liarokapis, A. Shipman, Stuart James, A. Steed, T. Heinis","doi":"10.1145/3489849.3489860","DOIUrl":"https://doi.org/10.1145/3489849.3489860","url":null,"abstract":"Crowd simulation algorithms play an essential role in populating Virtual Reality (VR) environments with multiple autonomous humanoid agents. The generation of plausible trajectories can be a significant computational cost for real-time graphics engines, especially in untethered and mobile devices such as portable VR devices. Previous research explores the plausibility and realism of crowd simulations on desktop computers but fails to account the impact it has on immersion. This study explores how the realism of crowd trajectories affects the perceived immersion in VR. We do so by running a psychophysical experiment in which participants rate the realism of real/synthetic trajectories data, showing similar level of perceived realism.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121406531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ImNDT: Immersive Workspace for the Analysis of Multidimensional Material Data From Non-Destructive Testing","authors":"Alexander Gall, E. Gröller, C. Heinzl","doi":"10.1145/3489849.3489851","DOIUrl":"https://doi.org/10.1145/3489849.3489851","url":null,"abstract":"An analysis of large multidimensional volumetric data as generated by non-destructive testing (NDT) techniques, e.g., X-ray computed tomography (XCT), can hardly be evaluated using standard 2D visualization techniques on desktop monitors. The analysis of fiber-reinforced polymers (FRPs) is currently a time-consuming and cognitively demanding task, as FRPs have a complex spatial structure, consisting of several hundred thousand fibers, each having more than twenty different extracted features. This paper presents ImNDT, a novel visualization system, which offers material experts an immersive exploration of multidimensional secondary data of FRPs. Our system is based on a virtual reality (VR) head-mounted device (HMD) to enable fluid and natural explorations through embodied navigation, the avoidance of menus, and manual mode switching. We developed immersive visualization and interaction methods tailored to the characterization of FRPs, such as a Model in Miniature, a similarity network, and a histo-book. An evaluation of our techniques with domain experts showed advantages in discovering structural patterns and similarities. Especially novices can strongly benefit from our intuitive representation and spatial rendering.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124345282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GazeMOOC: A Gaze Data Driven Visual Analytics System for MOOC with XR Content","authors":"Hao Wang, Yaqi Xie, Mingqi Wen, Zhuo Yang","doi":"10.1145/3489849.3489923","DOIUrl":"https://doi.org/10.1145/3489849.3489923","url":null,"abstract":"MOOC is widely used and more popular after COVID-19.In order to improve the learning effect, MOOC is evolving with XR technologies such as avatars, virtual scenes and experiments. This paper proposes a novel visual analytics system GazeMOOC, that can evaluate learners’ learning engagement in MOOC with XR content. For same MOOC content, gaze data of all learners are recorded and clustered. By differentiating gaze data of distracted learners and active learners, GazeMOOC can help evaluate MOOC content and learners’ learning engagement.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114180059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Table-Based Interactive System for Augmenting Japanese Food Culture Experience","authors":"Kei Kobayashi, Kazuma Nagata, Junichi Hoshino","doi":"10.1145/3489849.3489941","DOIUrl":"https://doi.org/10.1145/3489849.3489941","url":null,"abstract":"Washoku, traditional Japanese food culture, was evaluated as a social custom for food that embodies the Japanese spirit of respect for nature and was registered as a UNESCO Intangible Cultural Heritage in 2013. However, an actual meal is limited to taste and visual information such as taste, ingredients, tableware, and arrangements; it is difficult to become thoroughly familiar with the cultural characteristics of Japanese cuisine. This study achieved a system that conveys the characteristics of Japanese cuisine, such as the importance of seasonality and ingredients, by displaying the cultural background related to food in text form. The natural environment is projected by a projector on a table, and seasons progress as the meal advances. The food was created in consultation with the chef to be suitable for the system. The users who participated in our survey and experienced the system were conveyed that Japanese cuisine is supported by the richness and seasons of nature and that it also affects traditional events.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131735115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus","authors":"Florian Kern, M. Popp, Peter Kullmann, Elisabeth Ganal, Marc Erich Latoschik","doi":"10.1145/3489849.3489949","DOIUrl":"https://doi.org/10.1145/3489849.3489949","url":null,"abstract":"This article introduces the accessory dock, a 3D printed multi-purpose extension for consumer-grade XR controllers that enables flexible mounting of self-made and commercial accessories. The uniform design of our concept opens new opportunities for XR systems being used for more diverse purposes, e.g., researchers and practitioners could use and compare arbitrary XR controllers within their experiments while ensuring access to buttons and battery housing. As a first example, we present a stylus tip accessory to build an XR Stylus, which can be directly used with frameworks for handwriting, sketching, and UI interaction on physically aligned virtual surfaces. For new XR controllers, we provide instructions on how to adjust the accessory dock to the controller’s form factor. A video tutorial for the construction and the source files for 3D printing are publicly available for reuse, replication, and extension (https://go.uniwue.de/hci-otss-accessory-dock).","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133056814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive Analytics: A User-Centered Perspective","authors":"Payod Panda, Byungsoo Kim, Sana Behnam-Asl, Elif Sener","doi":"10.1145/3489849.3489951","DOIUrl":"https://doi.org/10.1145/3489849.3489951","url":null,"abstract":"Researchers have explored using VR and 3D data visualizations for analyzing and presenting data for several decades. Surveys of the literature in the field usually adopt a technical or systemic lens. We propose a survey of the Immersive Analytics literature from the user’s perspective that relates the purpose of the visualization to its technical qualities. We present our preliminary review to describe how device technologies, kinds of representation, collaborative features, and research design have been utilized to accomplish the purpose of the visualization. This poster demonstrates our preliminary investigation, inviting feedback from the VRST community. Our hope is the final version of our review will benefit designers, developers, and practitioners who want to implement immersive visualizations from a Human-Centered Design perspective, and help Immersive Analytics researchers get a better understanding of the gaps in current literature.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132688509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of User’s Gaze on the Unintended Positional Drift in Walk-in-Place","authors":"Donghyeon Kim, Hyeong-geon Kim, Myungho Lee","doi":"10.1145/3489849.3489928","DOIUrl":"https://doi.org/10.1145/3489849.3489928","url":null,"abstract":"Walk-In-Place (WIP) is a technique in which users perform walking or jogging-like movements in a stationary place to move around in virtual environments (VEs). However, unintended positional drift (UPD) while performing WIP often occurs, thus weakening its benefits of keeping users in a fixed position in a physical space. In this paper, we present our preliminary study exploring whether users’ gaze while WIP affects the direction of the UPD. Participants of the study jogged in a VE five times. Each time, we manipulated their gaze direction by displaying visual information in 5 different locations in their view. Although a correlation between the gaze and UPD direction was not found, we report the results from this study, including the amount of observed drift and preferred location of visual information, and discuss future research directions.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133308974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impostor-based Rendering Acceleration for Virtual, Augmented, and Mixed Reality","authors":"Martin Mišiak, Arnulph Fuhrmann, M. Latoschik","doi":"10.1145/3489849.3489865","DOIUrl":"https://doi.org/10.1145/3489849.3489865","url":null,"abstract":"This paper presents an image-based rendering approach to accelerate rendering time of virtual scenes containing a large number of complex high poly count objects. Our approach replaces complex objects by impostors, light-weight image-based representations leveraging geometry and shading related processing costs. In contrast to their classical implementation, our impostors are specifically designed to work in Virtual-, Augmented- and Mixed Reality scenarios (XR for short), as they support stereoscopic rendering to provide correct depth perception. Motion parallax of typical head movements is compensated by using a ray marched parallax correction step. Our approach provides a dynamic run-time recreation of impostors as necessary for larger changes in view position. The dynamic run-time recreation is decoupled from the actual rendering process. Hence, its associated processing cost is therefore distributed over multiple frames. This avoids any unwanted frame drops or latency spikes even for impostors of objects with complex geometry and many polygons. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"10 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124736124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Presenting Sense of Loud Vocalization Using Vibratory Stimuli to the Larynx and Auditory Stimuli","authors":"Yuki Shimomura, Yuki Ban, S. Warisawa","doi":"10.1145/3489849.3489891","DOIUrl":"https://doi.org/10.1145/3489849.3489891","url":null,"abstract":"In recent years, technologies related to virtual reality (VR) have continued to advance. As a method to enhance the VR experience, we focused on loud vocalization. This is because we believe that loud vocalization can enable us to engage with the VR environment in a more interactive way. Also, as loud vocalization is an action that is thought to be closely related to stress reduction and a sense of exhilaration, the stress reduction through VR with loud vocalization is also expected. But loud vocalization itself has disadvantages for physical, mental, and social reasons. Then, we hypothesized that loud vocalization itself is not necessary for such benefits; but the sense of loud vocalization plays an important role. Therefore, we focused on a method of substituting experience by presenting sensory stimuli. In this paper, we proposed a way to present the sense of loud vocalization through vibratory stimuli to the larynx and auditory stimuli to users who are actually vocalizing quietly with the expectation for the sense of loud vocalization. Our user study showed that the proposed method can extend the sense of vocalization and realize pseudo-loud vocalization. In addition, it was also shown that the proposed method can cause a sense of exhilaration. By contrast, excessively strong vibratory stimuli spoil the sense of loud vocalization, and thus the intensity of the vibration should be appropriately determined.","PeriodicalId":345527,"journal":{"name":"Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology","volume":"415 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124817649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}