Learning to generate emotional music correlated with music structure features

IF: 1.2 | JCR: Q4 | Category: Computer Science, Artificial Intelligence
Lin Ma, Wei Zhong, Xin Ma, Long Ye, Qin Zhang
{"title":"Learning to generate emotional music correlated with music structure features","authors":"Lin Ma,&nbsp;Wei Zhong,&nbsp;Xin Ma,&nbsp;Long Ye,&nbsp;Qin Zhang","doi":"10.1049/ccs2.12037","DOIUrl":null,"url":null,"abstract":"<p>Music can be regarded as an art of expressing inner feelings. However, most of the existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to the specified emotion, and also integrate the internal structural characteristics of music into the generation process. Specifically, we embed the emotional labels along with music structure features as the conditional input and then investigate the GRU network for generating emotional music. In addition to the generator, we also design a novel perceptually optimised emotion classification model which aims for promoting the generated music close to the emotion expression of real music. In order to validate the effectiveness of the proposed framework, both the subjective and objective experiments are conducted to verify that our method can produce emotional music correlated to the specified emotion and music structures.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2022-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12037","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 1

Abstract

Music can be regarded as an art of expressing inner feelings. However, most existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to a specified emotion and to integrate the internal structural characteristics of music into the generation process. Specifically, we embed emotional labels together with music structure features as the conditional input and employ a GRU network to generate emotional music. In addition to the generator, we design a novel perceptually optimised emotion classification model that aims to bring the generated music closer to the emotional expression of real music. To validate the effectiveness of the proposed framework, both subjective and objective experiments are conducted, verifying that our method can produce emotional music correlated with the specified emotion and music structures.
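The abstract describes a GRU generator conditioned on emotion labels and music structure features. The sketch below is a minimal, hypothetical PyTorch illustration of that conditioning idea, not the authors' implementation; the class name ConditionalGRUGenerator, the dimensions, and the note-token representation are all assumptions made for illustration.

# Minimal sketch (assumed PyTorch): a GRU that predicts the next note token,
# conditioned at every step on an emotion label and music structure features.
import torch
import torch.nn as nn

class ConditionalGRUGenerator(nn.Module):
    def __init__(self, note_vocab=128, note_dim=64,
                 num_emotions=4, emotion_dim=16,
                 structure_dim=8, hidden_dim=256):
        super().__init__()
        self.note_emb = nn.Embedding(note_vocab, note_dim)
        self.emotion_emb = nn.Embedding(num_emotions, emotion_dim)
        # The GRU consumes note, emotion and structure features concatenated per step.
        self.gru = nn.GRU(note_dim + emotion_dim + structure_dim,
                          hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, note_vocab)

    def forward(self, notes, emotion, structure):
        # notes:     (batch, time) integer note tokens
        # emotion:   (batch,) integer emotion label, broadcast over time
        # structure: (batch, time, structure_dim) per-step structure features
        x = self.note_emb(notes)
        e = self.emotion_emb(emotion).unsqueeze(1).expand(-1, notes.size(1), -1)
        h, _ = self.gru(torch.cat([x, e, structure], dim=-1))
        return self.proj(h)  # logits over the next note at every step

# Toy usage: two sequences of 16 steps with different emotion labels.
model = ConditionalGRUGenerator()
notes = torch.randint(0, 128, (2, 16))
emotion = torch.tensor([0, 3])
structure = torch.randn(2, 16, 8)
logits = model(notes, emotion, structure)  # shape (2, 16, 128)

In the paper, the generator is paired with a perceptually optimised emotion classification model that pushes generated sequences towards the emotional expression of real music; in a sketch like this, that would correspond to an auxiliary classification loss on the generator's output, which is omitted here.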

Source journal

Cognitive Computation and Systems (Computer Science: Computer Science Applications)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles published: 39
Review time: 10 weeks