{"title":"学习产生情感音乐与音乐结构特征相关","authors":"Lin Ma, Wei Zhong, Xin Ma, Long Ye, Qin Zhang","doi":"10.1049/ccs2.12037","DOIUrl":null,"url":null,"abstract":"<p>Music can be regarded as an art of expressing inner feelings. However, most of the existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to the specified emotion, and also integrate the internal structural characteristics of music into the generation process. Specifically, we embed the emotional labels along with music structure features as the conditional input and then investigate the GRU network for generating emotional music. In addition to the generator, we also design a novel perceptually optimised emotion classification model which aims for promoting the generated music close to the emotion expression of real music. In order to validate the effectiveness of the proposed framework, both the subjective and objective experiments are conducted to verify that our method can produce emotional music correlated to the specified emotion and music structures.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2022-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12037","citationCount":"1","resultStr":"{\"title\":\"Learning to generate emotional music correlated with music structure features\",\"authors\":\"Lin Ma, Wei Zhong, Xin Ma, Long Ye, Qin Zhang\",\"doi\":\"10.1049/ccs2.12037\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Music can be regarded as an art of expressing inner feelings. However, most of the existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to the specified emotion, and also integrate the internal structural characteristics of music into the generation process. Specifically, we embed the emotional labels along with music structure features as the conditional input and then investigate the GRU network for generating emotional music. In addition to the generator, we also design a novel perceptually optimised emotion classification model which aims for promoting the generated music close to the emotion expression of real music. 
In order to validate the effectiveness of the proposed framework, both the subjective and objective experiments are conducted to verify that our method can produce emotional music correlated to the specified emotion and music structures.</p>\",\"PeriodicalId\":33652,\"journal\":{\"name\":\"Cognitive Computation and Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2022-02-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12037\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Computation and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12037\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Learning to generate emotional music correlated with music structure features
Music can be regarded as an art of expressing inner feelings. However, most existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to a specified emotion and to integrate the internal structural characteristics of music into the generation process. Specifically, we embed the emotion labels together with music structure features as the conditional input and investigate a GRU network for generating emotional music. In addition to the generator, we design a novel perceptually optimised emotion classification model that aims to push the generated music closer to the emotional expression of real music. To validate the effectiveness of the proposed framework, both subjective and objective experiments are conducted, verifying that our method can produce emotional music correlated with the specified emotion and music structures.
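The abstract only outlines the architecture, but the general recipe it describes (a GRU sequence model conditioned on an emotion label and music structure features, plus an emotion classifier that can serve as a perceptual objective) can be illustrated with a minimal PyTorch sketch. Everything below, including the module names, dimensions, tokenised-note representation, and loss weighting, is an assumption made for illustration and not the authors' implementation.

```python
# Minimal sketch of a conditional GRU generator and an auxiliary emotion
# classifier, loosely following the idea in the abstract. All sizes, names,
# and the training objective are assumptions, not the paper's actual model.
import torch
import torch.nn as nn


class ConditionalGRUGenerator(nn.Module):
    """Predicts the next note token, conditioned on an emotion label and a
    vector of music structure features (hypothetical representation)."""
    def __init__(self, vocab_size=128, emb_dim=64, emotion_classes=4,
                 structure_dim=16, hidden_dim=256):
        super().__init__()
        self.note_emb = nn.Embedding(vocab_size, emb_dim)
        self.emotion_emb = nn.Embedding(emotion_classes, emb_dim)
        self.structure_proj = nn.Linear(structure_dim, emb_dim)
        self.gru = nn.GRU(emb_dim * 3, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, notes, emotion, structure):
        # notes: (B, T) int tokens; emotion: (B,) int labels;
        # structure: (B, structure_dim) float features
        T = notes.size(1)
        cond = torch.cat([self.emotion_emb(emotion),
                          self.structure_proj(structure)], dim=-1)  # (B, 2*emb)
        cond = cond.unsqueeze(1).expand(-1, T, -1)                  # (B, T, 2*emb)
        x = torch.cat([self.note_emb(notes), cond], dim=-1)         # (B, T, 3*emb)
        h, _ = self.gru(x)
        return self.out(h)                                          # (B, T, vocab)


class EmotionClassifier(nn.Module):
    """Predicts the emotion of a note sequence; evaluating it on generated
    music gives one possible perceptual term that rewards matching the
    target emotion."""
    def __init__(self, vocab_size=128, emb_dim=64, hidden_dim=128,
                 emotion_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, emotion_classes)

    def forward(self, notes):
        _, h = self.gru(self.emb(notes))  # h: (1, B, hidden)
        return self.head(h.squeeze(0))    # (B, emotion_classes)


if __name__ == "__main__":
    gen, clf = ConditionalGRUGenerator(), EmotionClassifier()
    notes = torch.randint(0, 128, (2, 32))       # toy token sequences
    emotion = torch.tensor([0, 3])               # target emotion labels
    structure = torch.randn(2, 16)               # toy structure features
    logits = gen(notes[:, :-1], emotion, structure)
    nll = nn.functional.cross_entropy(logits.reshape(-1, 128),
                                      notes[:, 1:].reshape(-1))
    # Emotion term on the argmax decoding; argmax is non-differentiable, so a
    # real system would need a differentiable sampling or feature-level scheme.
    emo_loss = nn.functional.cross_entropy(clf(logits.argmax(-1)), emotion)
    loss = nll + 0.1 * emo_loss
    print(float(nll), float(emo_loss), float(loss))
```

The sketch combines a token-level likelihood loss with a classifier-based emotion term; how the paper actually couples the classifier to the generator is not specified in the abstract.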