Music sparse decomposition onto a MIDI dictionary of musical words and its application to music mood classification

Boyang Gao, E. Dellandréa, Liming Chen
{"title":"Music sparse decomposition onto a MIDI dictionary of musical words and its application to music mood classification","authors":"Boyang Gao, E. Dellandréa, Liming Chen","doi":"10.1109/CBMI.2012.6269798","DOIUrl":null,"url":null,"abstract":"Most of the automated music analysis methods available in the literature rely on the representation of the music through a set of low-level audio features related to temporal and frequential properties. Identifying high-level concepts, such as music mood, from this \"black-box\" representation is particularly challenging. Therefore we present in this paper a novel music representation that allows gaining an in-depth understanding of the music structure. Its principle is to decompose sparsely the music into a basis of elementary audio elements, called musical words, which represent the notes played by various instruments generated through a MIDI synthesizer. From this representation, a music feature is also proposed to allow automatic music classification. Experiments driven on two music datasets have shown the effectiveness of this approach to represent accurately music signals and to allow efficient classification for the complex problem of music mood classification.","PeriodicalId":120769,"journal":{"name":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMI.2012.6269798","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Most of the automated music analysis methods available in the literature rely on representing music through a set of low-level audio features related to temporal and spectral properties. Identifying high-level concepts, such as music mood, from this "black-box" representation is particularly challenging. Therefore, we present in this paper a novel music representation that allows gaining an in-depth understanding of the music structure. Its principle is to sparsely decompose the music onto a basis of elementary audio elements, called musical words, which represent the notes played by various instruments and are generated through a MIDI synthesizer. From this representation, a music feature is also proposed to allow automatic music classification. Experiments conducted on two music datasets have shown the effectiveness of this approach in accurately representing music signals and enabling efficient classification for the complex problem of music mood classification.
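To make the idea of decomposing music onto a dictionary of musical words more concrete, here is a minimal sketch in Python. It is not the authors' exact algorithm (the abstract does not specify the solver): it assumes each dictionary atom is a fixed-length, unit-norm waveform of a single synthesized note, and it approximates a music frame as a sparse combination of those atoms using plain matching pursuit as a stand-in sparse decomposition. The frame length, dictionary size, and stopping rule below are illustrative assumptions, and the toy dictionary is random rather than MIDI-synthesized.

import numpy as np

def matching_pursuit(frame, dictionary, n_atoms=10):
    """Greedily select the dictionary atoms ("musical words") that best explain the frame.

    frame      : 1-D signal frame, shape (n_samples,)
    dictionary : atoms as columns, shape (n_samples, n_words), each L2-normalized
    n_atoms    : sparsity level, i.e. number of musical words kept
    Returns the sparse coefficient vector, shape (n_words,).
    """
    residual = frame.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual        # match every word against the residual
        best = int(np.argmax(np.abs(correlations)))   # most correlated musical word
        coeffs[best] += correlations[best]            # accumulate its contribution
        residual -= correlations[best] * dictionary[:, best]
    return coeffs

# Hypothetical usage: 128 random "note" atoms over 2048-sample frames.
rng = np.random.default_rng(0)
D = rng.standard_normal((2048, 128))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
signal_frame = 0.8 * D[:, 5] + 0.5 * D[:, 42]         # toy frame built from two words
alpha = matching_pursuit(signal_frame, D, n_atoms=5)
print(np.argsort(-np.abs(alpha))[:2])                 # indices of the dominant words

The resulting sparse coefficient vectors, pooled over the frames of a piece, are the kind of representation from which a classification feature (e.g., for music mood) could then be derived, as the paper proposes.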