Immersive Orchestras: Audio Processing for Orchestral Music VR Content
J. Janer, E. Gómez, A. Martorell, M. Miron, B. Wit
2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), September 2016. DOI: 10.1109/VS-GAMES.2016.7590352
This paper combines audio signal processing and Virtual Reality (VR) content to create novel immersive experiences for orchestral music audiences. In VR, the auralization of sound sources in recorded live content remains a rather unexplored topic. We aim to build a multimodal experience in which visual and audio cues provide a sonic augmentation of the real scene. In the particular scenario of orchestral music content, our goal is to acoustically zoom in on a particular instrument when the VR user stares at it. This work aims to improve the learning aspects of music listening, whether for education or for personal enrichment. We use audio signal processing to separate the different sound sources (instruments) in an acoustic scene (an orchestral music recording). Given the signals captured by multiple microphones and the musical score of the piece, our system is able to isolate the different instruments. From the processed, separated tracks, we use a binaural rendering technique to emphasize a given instrument. For these experiments we used original content from top European orchestras.
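The abstract outlines two stages: score-informed separation of the multi-microphone recording into per-instrument stems, followed by a rendering step that emphasizes the instrument under the user's gaze. As a rough illustration of the second stage only, the sketch below mixes already-separated stems with a gaze-dependent gain and a constant-power stereo pan. The function names, gain values, and stage azimuths are illustrative assumptions, and the simple panning is a stand-in for the paper's actual binaural rendering, which would typically involve HRTF-based processing rather than a stereo pan.

```python
import numpy as np

def db_to_lin(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def pan_stereo(mono, azimuth_deg):
    """Constant-power panning of a mono stem to a stereo pair from its stage
    azimuth. A crude stand-in for proper binaural (HRTF) rendering."""
    # Map azimuth in [-90, 90] degrees to a pan angle in [0, pi/2].
    theta = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * (np.pi / 2.0)
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=0)

def acoustic_zoom(stems, azimuths, gazed, zoom_db=6.0, duck_db=-6.0):
    """Render a stereo mix in which the instrument the VR user stares at is
    boosted and the rest of the orchestra is ducked.

    stems    -- dict: instrument name -> mono signal (1-D arrays, equal length),
                assumed to come from a prior score-informed separation stage.
    azimuths -- dict: instrument name -> stage azimuth in degrees (assumed).
    gazed    -- name of the instrument currently under the user's gaze.
    """
    n = len(next(iter(stems.values())))
    mix = np.zeros((2, n))
    for name, signal in stems.items():
        gain = db_to_lin(zoom_db if name == gazed else duck_db)
        mix += pan_stereo(gain * signal, azimuths[name])
    return mix

# Example with three hypothetical separated stems, the user gazing at the oboe.
fs = 48000
t = np.arange(fs) / fs
stems = {
    "violins": 0.2 * np.sin(2 * np.pi * 440.0 * t),
    "oboe":    0.2 * np.sin(2 * np.pi * 587.3 * t),
    "cellos":  0.2 * np.sin(2 * np.pi * 130.8 * t),
}
azimuths = {"violins": -40.0, "oboe": 10.0, "cellos": 45.0}
stereo = acoustic_zoom(stems, azimuths, gazed="oboe")
```

In a real-time VR setting the gaze target and the per-stem gains would be updated continuously (with smoothing to avoid clicks), but the gain-and-spatialize structure shown here captures the basic idea of the acoustic zoom described in the abstract.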