An adaptive music generation architecture for games based on the deep learning Transformer model
Gustavo Amaral Costa dos Santos, A. Baffa, Jean-Pierre Briot, Bruno Feijó, Antonio Luz Furtado
2022 21st Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), published 2022-07-04
DOI: 10.1109/SBGAMES56371.2022.9961081 (https://doi.org/10.1109/SBGAMES56371.2022.9961081)
This paper presents an architecture for generating music for video games based on the Transformer deep learning model. Our motivation is to customize the generation according to the taste of the player, who can select a corpus of training examples corresponding to their preferred musical style. The system generates various musical layers, following the standard layering strategy currently used by composers of video game music. To adapt the generated music to the gameplay and to the players' situation, we use an arousal-valence model of emotions to control the selection of musical layers. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
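The abstract only sketches the control mechanism, so the following is a minimal, purely illustrative Python sketch of how an arousal-valence emotion state could gate which pre-generated musical layers are audible. It is not the authors' implementation: the layer names, thresholds, and mapping are assumptions chosen to make the idea concrete.

```python
# Illustrative sketch only: the paper does not publish this code. Layer names
# and thresholds below are assumptions, not the authors' actual design.

from dataclasses import dataclass


@dataclass
class EmotionState:
    """Game-play emotion estimate, with both axes normalized to [-1.0, 1.0]."""
    valence: float  # negative = unpleasant, positive = pleasant
    arousal: float  # low = calm, high = intense


def select_layers(state: EmotionState) -> list[str]:
    """Return the subset of musical layers to play back for this emotion state.

    Hypothetical layer names; a real system would use the layers produced by
    the Transformer-based generator described in the paper.
    """
    layers = ["pad"]  # a base layer is always present

    if state.arousal > -0.25:
        layers.append("rhythm")        # add percussion once the scene is no longer calm
    if state.arousal > 0.5:
        layers.append("ostinato")      # high-intensity driving layer

    if state.valence > 0.25:
        layers.append("melody_major")  # pleasant situations get a brighter melody
    elif state.valence < -0.25:
        layers.append("melody_minor")  # unpleasant situations get a darker melody

    return layers


if __name__ == "__main__":
    # A tense combat moment: high arousal, negative valence.
    print(select_layers(EmotionState(valence=-0.6, arousal=0.8)))
    # A quiet exploration moment: low arousal, mildly positive valence.
    print(select_layers(EmotionState(valence=0.3, arousal=-0.5)))
```

The key design choice this sketch illustrates is that the generator and the adaptation logic are decoupled: layers are produced offline in the player's chosen style, while the emotion model only decides, at runtime, which of them are mixed in.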