Title: SongBot: An Interactive Music Generation Robotic System for Non-musicians Learning from A Song
Authors: Kaiwen Xue, Zhixuan Liu, Jiaying Li, Xiaoqiang Ji, Huihuan Qian
Venue: 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)
Publication date: 2021-07-15
DOI: 10.1109/RCAR52367.2021.9517454 (https://doi.org/10.1109/RCAR52367.2021.9517454)
Abstract
This paper proposes an interactive system that helps non-musician learners draw inspiration from a song. Unlike complex deep-learning models or simple Markov models that capture few of music's inter-note features, this research unifies the composition of a song in a general architecture grounded in music theory, and thus offers non-musician learners a far more understandable view of music generation. The proposed model focuses on extracting the existing features of a target song and recreating different phrases from the probabilistic graph that represents the target song, based on the relationships among notes within a phrase. Furthermore, an interactive interface between the users and the proposed system is built with a tunable parameter, allowing users to take part in the music generation and creation procedure. This procedure provides practical experience that helps non-musicians understand and learn from composing a song. Approximately 700 responses to a preference questionnaire comparing the generated music with the original music, and more than 3000 interactive preference votes on the tunable parameter, have been collected. Extensive experiments have validated the proposed system.
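The abstract does not spell out the exact form of the probabilistic graph or the tunable parameter, so the following is only a minimal illustrative sketch: it assumes a first-order note-transition graph learned from a target phrase and a hypothetical "temperature" parameter standing in for the user-tunable control, with higher values producing more surprising variations and lower values staying closer to the original song.

```python
# Minimal sketch only: the paper's actual model and parameter are not published
# in this record. This assumes a first-order note-transition graph and a
# hypothetical "temperature" knob as the tunable interaction parameter.
import random
from collections import defaultdict


def build_transition_graph(notes):
    """Count how often each note follows another in the target phrase,
    then normalize the counts into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(notes, notes[1:]):
        counts[prev][nxt] += 1
    graph = {}
    for prev, successors in counts.items():
        total = sum(successors.values())
        graph[prev] = {n: c / total for n, c in successors.items()}
    return graph


def generate_phrase(graph, start, length, temperature=1.0):
    """Sample a new phrase from the graph; the (assumed) temperature reshapes
    the learned probabilities to trade off fidelity against novelty."""
    phrase = [start]
    for _ in range(length - 1):
        current = phrase[-1]
        if current not in graph:
            break  # dead end: the note never had a successor in the target song
        notes, probs = zip(*graph[current].items())
        weights = [p ** (1.0 / temperature) for p in probs]
        total = sum(weights)
        weights = [w / total for w in weights]
        phrase.append(random.choices(notes, weights=weights)[0])
    return phrase


# Example: learn from a short phrase (MIDI pitch numbers) and recreate a variant.
target = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]
graph = build_transition_graph(target)
print(generate_phrase(graph, start=60, length=12, temperature=1.5))
```

In this sketch the temperature plays the role described for the tunable parameter: a non-musician can slide it and immediately hear how strongly the regenerated phrase adheres to the note relationships extracted from the original song.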