{"title":"Lightweight Scene-aware Rain Sound Simulation for Interactive Virtual Environments","authors":"Haonan Cheng, Shiguang Liu, Jiawan Zhang","doi":"10.1109/VR55154.2023.00038","DOIUrl":null,"url":null,"abstract":"We present a lightweight and efficient rain sound synthesis method for interactive virtual environments. Existing rain sound simulation methods require massive superposition of scene-specific precomputed rain sounds, which is excessive memory consumption for virtual reality systems (e.g. video games) with limited audio memory budgets. Facing this issue, we reduce the audio memory budgets by introducing a lightweight rain sound synthesis method which is only based on eight physically-inspired basic rain sounds. First, in order to generate sufficiently various rain sounds with limited sound data, we propose an exponential moving average based frequency domain additive (FDA) synthesis method to extend and modify the pre-computed basic rain sounds. Each rain sound is generated in the frequency domain before conversion back to the time domain, allowing us to extend the rain sound which is free of temporal distortions and discontinuities. Next, we introduce an efficient binaural rendering method to simulate the 3D perception that coheres with the visual scene based on a set of Near-Field Transfer Functions (NFTF). Various results demonstrate that the proposed method drastically decreases the memory cost (77 times compressed) and overcomes the limitations of existing methods in terms of interaction.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR55154.2023.00038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We present a lightweight and efficient rain sound synthesis method for interactive virtual environments. Existing rain sound simulation methods require massive superposition of scene-specific precomputed rain sounds, which incurs excessive memory consumption for virtual reality systems (e.g., video games) with limited audio memory budgets. To address this issue, we reduce the audio memory footprint by introducing a lightweight rain sound synthesis method based on only eight physically inspired basic rain sounds. First, in order to generate sufficiently varied rain sounds from this limited sound data, we propose an exponential-moving-average-based frequency-domain additive (FDA) synthesis method that extends and modifies the precomputed basic rain sounds. Each rain sound is generated in the frequency domain before conversion back to the time domain, allowing us to extend the rain sounds free of temporal distortions and discontinuities. Next, we introduce an efficient binaural rendering method, based on a set of Near-Field Transfer Functions (NFTFs), to simulate 3D auditory perception that is coherent with the visual scene. Various results demonstrate that the proposed method drastically decreases the memory cost (a 77-fold compression) and overcomes the limitations of existing methods in terms of interactivity.
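The core synthesis step described above, extending a short precomputed rain sound by smoothing its frequency-domain representation with an exponential moving average (EMA) and resynthesising in the time domain, can be sketched roughly as follows. This is a minimal illustrative sketch and not the authors' implementation: the function name `extend_rain_sound`, the frame and hop sizes, the EMA factor `alpha`, and the random-phase overlap-add resynthesis are assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's reference implementation) of extending a
# short basic rain sound via EMA smoothing of its frequency-domain content.
import numpy as np

def extend_rain_sound(basic, out_len, frame=1024, hop=512, alpha=0.1, seed=0):
    """Extend `basic` (mono float array) to `out_len` samples by
    resynthesising an EMA-smoothed magnitude spectrum with fresh phases."""
    rng = np.random.default_rng(seed)
    win = np.hanning(frame)

    # Analyse the basic rain sound frame by frame and keep an EMA of its
    # magnitude spectrum, which captures the broadband rain timbre.
    ema = None
    for start in range(0, len(basic) - frame, hop):
        mag = np.abs(np.fft.rfft(basic[start:start + frame] * win))
        ema = mag if ema is None else alpha * mag + (1 - alpha) * ema

    # Additive resynthesis in the frequency domain: each output frame uses
    # the smoothed magnitude with random phase, then overlap-add back to the
    # time domain, so successive frames join without clicks or repetition.
    out = np.zeros(out_len + frame)
    norm = np.zeros(out_len + frame)
    for start in range(0, out_len, hop):
        phase = rng.uniform(0.0, 2.0 * np.pi, size=ema.shape)
        spec = ema * np.exp(1j * phase)
        out[start:start + frame] += np.fft.irfft(spec, n=frame) * win
        norm[start:start + frame] += win ** 2
    return out[:out_len] / np.maximum(norm[:out_len], 1e-8)
```

Under these assumptions, a call such as `extend_rain_sound(basic_rain, out_len=10 * 48000)` would produce ten seconds of rain at 48 kHz from a much shorter basic recording, illustrating how a small set of stored sounds can serve an arbitrarily long interactive session.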