Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies
Anastassia Andreasen, Michele Geronazzo, Niels Christian Nilsson, Jelizaveta Zovnercuka, Kristian Konovalov, Stefania Serafin
IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 5, pp. 1876-1886, May 2019 (Epub 18 Feb 2019).
DOI: 10.1109/TVCG.2019.2898787
Citations: 7
Abstract
Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from their reflections. This paper presents design patterns for VE simulators covering both training and testing procedures, and classifies users' navigation strategies in the VE. Moreover, the paper presents features that increase users' performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task under four conditions: early reflections (RF), late reverberation (RV), combined early reflections and late reverberation (RR), and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.
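The abstract distinguishes three auditory rendering conditions (RF, RV, RR). As a rough illustration of what such conditions can mean acoustically, the sketch below builds synthetic impulse responses in Python/NumPy and convolves an emitted pulse with them. Everything here (function names, delay times, gains, the 0.8 s decay) is an assumption made for illustration; the paper's actual VE audio pipeline is not described in this listing.

```python
# Minimal sketch (not the paper's implementation) of the three auditory
# conditions named in the abstract: early reflections (RF), late
# reverberation (RV), and their combination (RR). All numeric values
# and function names are illustrative assumptions.
import numpy as np

FS = 44100  # sample rate in Hz

def direct_sound(length_s=0.5):
    """Impulse response containing only the direct sound."""
    ir = np.zeros(int(FS * length_s))
    ir[0] = 1.0
    return ir

def early_reflections(delays_ms=(12, 23, 37), gains=(0.6, 0.4, 0.3), length_s=0.5):
    """A few discrete early reflections (no direct sound, no tail)."""
    ir = np.zeros(int(FS * length_s))
    for d_ms, g in zip(delays_ms, gains):
        ir[int(FS * d_ms / 1000)] = g
    return ir

def late_reverb(rt60_s=0.8, onset_ms=50, level=0.3, length_s=0.5):
    """A diffuse, exponentially decaying tail (no direct sound, no early part)."""
    n = int(FS * length_s)
    t = np.arange(n) / FS
    tail = level * np.random.randn(n) * 10.0 ** (-3.0 * t / rt60_s)  # -60 dB at rt60_s
    ir = np.zeros(n)
    onset = int(FS * onset_ms / 1000)
    ir[onset:] = tail[: n - onset]
    return ir

# Impulse responses for the three auditory conditions.
IRS = {
    "RF": direct_sound() + early_reflections(),
    "RV": direct_sound() + late_reverb(),
    "RR": direct_sound() + early_reflections() + late_reverb(),
}

def render(pulse, condition):
    """Convolve an emitted pulse with the impulse response of the chosen condition."""
    return np.convolve(pulse, IRS[condition])

# Usage: a short click emitted by the user, rendered under the combined condition.
click = np.zeros(256)
click[0] = 1.0
echoic_click = render(click, "RR")
```

Keeping the direct sound separate from the reflection and tail components ensures the combined RR condition does not double the direct path when the pieces are summed.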
Journal overview:
TVCG is a scholarly, archival journal published monthly. Its Editorial Board strives to publish important research results and state-of-the-art seminal papers in computer graphics, visualization, and virtual reality. Specific topics include, but are not limited to: rendering technologies; geometric modeling and processing; shape analysis; graphics hardware; animation and simulation; perception, interaction and user interfaces; haptics; computational photography; high-dynamic-range imaging and display; user studies and evaluation; biomedical visualization; volume visualization and graphics; visual analytics for machine learning; topology-based visualization; visual programming and software visualization; visualization in data science; virtual reality, augmented reality and mixed reality; advanced display technology (e.g., 3D, immersive and multi-modal displays); and applications of computer graphics and visualization.