Melody Generation with Emotion Constraint
Ren-ge Huang, Yin Li, Da Kang, Yujie Chen, Chunyan Yu, Xiu Wang
Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering, October 22, 2021
DOI: 10.1145/3501409.3501691
Most existing melody generation models introduce chord, rhythm, and other constraints into the generation process to ensure the quality of the generated melody, but they ignore the importance of emotion. Music is an emotional art, and as the primary component of a piece of music, a melody usually carries a clear emotional expression. It is therefore necessary to introduce emotion information and emotion constraints, so that the model can learn emotion-related characteristics from the given information and generate melodies with clear emotional expression. To this end, we propose ECMG, a melody generation model with emotion constraints. The model uses a Generative Adversarial Network (GAN) as its backbone and adds an emotion encoder and an emotion classifier to introduce emotion information and emotion constraints. We evaluated the melodies generated by ECMG in terms of both quality and emotion. In the quality evaluation, the quality score of melodies generated by ECMG differs from that of real melodies in the training set by less than 0.2, and it is also close to the quality score of melodies generated by PopMNet. In the emotion evaluation, the emotion classification accuracy in both the four-category and the two-category settings is much higher than random chance. These results show that ECMG can generate melodies with specified emotions while maintaining high generation quality.
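To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of an emotion-conditioned melody GAN in the spirit of ECMG. The paper's actual network details are not reproduced here: the layer sizes, the piano-roll-style melody representation, the number of emotion classes, and all module and variable names (EmotionEncoder, Generator, Discriminator, EmotionClassifier, NUM_EMOTIONS, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an emotion-conditioned melody GAN (ECMG-like).
# All sizes, names, and the melody representation are assumptions for illustration.
import torch
import torch.nn as nn

NUM_EMOTIONS = 4      # e.g. the four emotion categories used in the evaluation
LATENT_DIM = 64       # assumed noise dimensionality
MELODY_STEPS = 32     # assumed number of time steps per generated phrase
PITCH_CLASSES = 128   # assumed pitch vocabulary per step

class EmotionEncoder(nn.Module):
    """Maps a discrete emotion label to a dense conditioning vector."""
    def __init__(self, emb_dim=16):
        super().__init__()
        self.embed = nn.Embedding(NUM_EMOTIONS, emb_dim)

    def forward(self, emotion_ids):
        return self.embed(emotion_ids)

class Generator(nn.Module):
    """Generates a melody (pitch logits per step) from noise plus an emotion code."""
    def __init__(self, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + emb_dim, 256),
            nn.ReLU(),
            nn.Linear(256, MELODY_STEPS * PITCH_CLASSES),
        )

    def forward(self, z, emotion_code):
        x = torch.cat([z, emotion_code], dim=-1)
        return self.net(x).view(-1, MELODY_STEPS, PITCH_CLASSES)

class Discriminator(nn.Module):
    """Scores whether a melody looks real (quality constraint)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(MELODY_STEPS * PITCH_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, melody):
        return self.net(melody)

class EmotionClassifier(nn.Module):
    """Predicts the emotion of a melody (emotion constraint on the generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(MELODY_STEPS * PITCH_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_EMOTIONS),
        )

    def forward(self, melody):
        return self.net(melody)

# One illustrative generator update: an adversarial loss for melody quality plus an
# emotion-classification loss that pushes the output toward the requested emotion.
if __name__ == "__main__":
    enc, gen = EmotionEncoder(), Generator()
    disc, clf = Discriminator(), EmotionClassifier()
    emotion = torch.randint(0, NUM_EMOTIONS, (8,))
    z = torch.randn(8, LATENT_DIM)
    fake = gen(z, enc(emotion))
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        disc(fake), torch.ones(8, 1))
    emo_loss = nn.functional.cross_entropy(clf(fake), emotion)
    (adv_loss + emo_loss).backward()
```

The key design point this sketch illustrates is that the emotion constraint enters in two places: the emotion encoder conditions generation on the target emotion, and the emotion classifier adds a loss term so the generator is penalized when its output does not express that emotion.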