{"title":"Augmenting virtual-reality environments with social-signal based music content","authors":"Ioannis Karydis, I. Deliyannis, A. Floros","doi":"10.1109/ICDSP.2011.6004944","DOIUrl":null,"url":null,"abstract":"Virtual environments and computer games incorporate music in order to enrich the audiovisual experience and further immerse users. Selecting musical content during design-time can have a controversial result based on the preferences of the users involved, while limiting the interactivity of the environment, affecting thus the effectiveness of immersion. In this work, we introduce a framework for the selection and incorporation of user preferable musical data into interactive virtual environments and games. The framework designates guidelines for both design and run-time annotation of scenes. Consequently, personal music preferences collected through local repositories or social networks can be processed, analysed, categorised and prepared for direct incorporation into virtual environments. This permits automated audio selection based on scene characteristics and scene characters' interaction, enriching or replacing the default designer choices. Proof-of-concept is given via development of a web-service that provides a video game with a dynamic interactive audio content based on predefined video game scene annotation and user musical preferences recorded in social network services.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 17th International Conference on Digital Signal Processing (DSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2011.6004944","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Virtual environments and computer games incorporate music to enrich the audiovisual experience and further immerse users. Selecting musical content at design time can produce divisive results, since the preferences of the users involved vary, and it limits the interactivity of the environment, thus reducing the effectiveness of immersion. In this work, we introduce a framework for selecting and incorporating user-preferred musical data into interactive virtual environments and games. The framework provides guidelines for both design-time and run-time annotation of scenes. Personal music preferences collected from local repositories or social networks can then be processed, analysed, categorised, and prepared for direct incorporation into virtual environments. This permits automated audio selection based on scene characteristics and the interactions of scene characters, enriching or replacing the designer's default choices. Proof of concept is given through a web service that provides a video game with dynamic, interactive audio content based on predefined scene annotations and user musical preferences recorded in social networking services.
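The selection step the abstract describes — matching annotated scene characteristics against a categorised pool of user-preferred tracks, with the designer's choice as fallback — can be sketched roughly as below. This is a minimal illustration under assumed conventions, not the authors' implementation; all names (Track, SceneAnnotation, select_track) and the tag-overlap scoring scheme are hypothetical.

```python
# Hypothetical sketch of scene-to-music matching: annotated scenes
# are scored against categorised user-preferred tracks; the
# designer's default is used when nothing matches. Names and the
# tagging scheme are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    tags: set            # e.g. {"calm", "orchestral"} from preference analysis


@dataclass
class SceneAnnotation:
    scene_id: str
    tags: set            # design-time or run-time scene descriptors


def select_track(scene: SceneAnnotation, pool: list, default: Track) -> Track:
    """Pick the user-preferred track whose tags best overlap the
    scene annotation; fall back to the designer's default choice."""
    best, best_score = default, 0
    for track in pool:
        score = len(track.tags & scene.tags)
        if score > best_score:
            best, best_score = track, score
    return best


# Example: a "battle" scene picks an energetic track from the
# (hypothetically categorised) user preference pool.
pool = [Track("Nocturne", {"calm", "piano"}),
        Track("Thunder Run", {"energetic", "percussive"})]
scene = SceneAnnotation("battle_01", {"energetic", "tense"})
fallback = Track("Designer Default", {"neutral"})
print(select_track(scene, pool, fallback).title)  # -> Thunder Run
```

In a deployment along the lines of the paper's proof of concept, the pool would be populated from preferences harvested via social network services and the scene annotations supplied by the game at run time.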