{"title":"Scenography of immersive virtual musical instruments","authors":"Florent Berthaut, Victor Zappi, D. Mazzanti","doi":"10.1109/SIVE.2014.7006285","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006285","url":null,"abstract":"Immersive Virtual Musical Instruments (IVMIs) can be considered as the meeting between Music Technology and Virtual Reality. Being both musical instruments and elements of Virtual Environments, IVMIs require a transversal approach from their designers, in particular when the final aim is to play them in front of an audience, as part of a scenography. In this paper, we combine the main constraints of musical performances and Virtual Reality applications into a set of dimensions, meant to extensively describe IVMIs stage setups. A number of existing stage setups are then classified using these dimensions, explaining how they were used to showcase live virtual performances and discussing their scenographic level.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130210166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sonic interaction with a virtual orchestra of factory machinery","authors":"L. Simon, Florian Nouviale, R. Gaugne, V. Gouranton","doi":"10.1109/SIVE.2014.7006283","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006283","url":null,"abstract":"This paper presents an immersive application where users receive sound and visual feedbacks on their interactions with a virtual environment. In this application, the users play the part of conductors of an orchestra of factory machines since each of their actions on interaction devices triggers a pair of visual and audio responses. Audio stimuli were spatialized around the listener. The application was exhibited during the 2013 Science and Music day and designed to be used in a large immersive system with head tracking, shutter glasses and a 10.2 loudspeaker configuration.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114694073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wave-based sound propagation for VR applications","authors":"Ravish Mehra, Dinesh Manocha","doi":"10.1109/SIVE.2014.7006289","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006289","url":null,"abstract":"Realistic sound effects are extremely important in VR to improve the sense of presence and immersion. They augment the visual sense of the user and can help reduce simulation fatigue. Sound can provide 3D spatial cues outside eld of view and help create high-fidelity VR training simulations. Current sound propagation techniques are based on heuristic approaches or simple line-of-sight based geometric techniques. These techniques cannot capture important sound effects such as diffraction, interference, focusing. For VR applications, there is a need for high-fidelity, accurate sound propagation. In order to model sound propagation accurately, it is important to model wave-based sound propagation. We present a set of efficient wave-based propagation techniques for VR applications that can handle large scenes, directional sound sources, and generate spatial sound. Our technique has been integrated in Valve's game engine and we use it to demonstrate realistic acoustic effects such as diffraction, high-order re ection, interference, directivity, and spatialization, in complex scenarios.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122379757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping and interaction strategies for performing environmental sound","authors":"Christian Heinrichs, Andrew Mcpherson","doi":"10.1109/SIVE.2014.7006286","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006286","url":null,"abstract":"While the design of computational audio models for real-time generation of sound has been gaining increasing attention in the field of virtual reality and games over the last few years, questions related to expressivity and human performability have remained largely unexplored. Unlike in the design of interactive sonic artefacts a performable model requires a different approach to parametrisation and interaction. A model of a squeaking door is presented along with three contrasting mapping strategies between a generic touch-based interface and parameters controlling phenomenologically meaningful sound qualities. Each of these mapping strategies is evaluated in a controlled study based around a set of four metrics proposed by the authors. Correlations between quantitative and qualitative data verify the evaluation procedure for each of these metrics.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"915 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126976399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reproducible sonification for virtual navigation","authors":"Norberto Degara, Thimmaiah Kuppanda, Timothy Neate, Jiajun Yang, Andres Torres","doi":"10.1109/SIVE.2014.7006288","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006288","url":null,"abstract":"The use of sonification for navigation, localization and obstacle avoidance is considered to be one of the most important tasks in auditory display research for its potential application to navigation systems in vehicles and smartphones, assistive technology and other eyes-free applications. The aim of this technology is to deliver location-based information to support navigation through sound. In this paper a comparison of two sonification methods for navigation and obstacle avoidance is presented. These methods were initially developed during a sonification hack day that was ran during the Interactive Sonification (ISon) workshop 2013. In order to allow the formal comparison of methods, we followed a reproducible sonification approach using a set of guidelines provided by SonEX (Sonification Evaluation eXchange). SonEX is a community-based environment that enables the definition and evaluation of standardized tasks, supporting open science standards and reproducible research. In order to allow for reproducible research, the system has been made publicly available.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130629688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient modal sound synthesis on GPUs","authors":"Dominik Rausch, B. Hentschel, T. Kuhlen","doi":"10.1109/SIVE.2014.7006284","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006284","url":null,"abstract":"Modal sound synthesis is a useful method to interactively generate sounds for Virtual Environments. Forces acting on objects excite modes, which then have to be accumulated to generate the output sound. Due to the high audio sampling rate, algorithms using the CPU typically can handle only a few actively sounding objects. Additionally, force excitation should be applied at a high sampling rate. We present different algorithms to compute the synthesized sound using a GPU, and compare them to CPU implementations. The GPU algorithms shows a significantly higher performance, and allows many sounding objects simultaneously.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129373777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The potentials for spatial audio to convey information in virtual environments","authors":"K. McMullen","doi":"10.1109/SIVE.2014.7006287","DOIUrl":"https://doi.org/10.1109/SIVE.2014.7006287","url":null,"abstract":"Digital sounds can be processed such that auditory cues are created that convey spatial location within a virtual auditory environment (VAE). Only in recent years has technology advanced such that audio can be processed in real-time as a user navigates an environment. We must first consider the perceptual challenges faced by 3D sound rendering, before we can realize its full potential. Now more than ever before, large quantities of data are created and collected at an increasing rate. Research in human perception has demonstrated that humans are capable of differentiating among many sounds. One potential application is to create an auditory virtual world in which data is represented as various sounds. Such a representation could aid data analysts in detecting patterns in data, decreasing cognitive load, and performing their jobs faster. Although this is one application, the full extent of the manner in which 3D sounds can be used to augment virtual environments has yet to be discovered.","PeriodicalId":173215,"journal":{"name":"2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130674424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}