An interplay between genre and emotion prediction in music: a study in the Emotify dataset

Leonardo Vilela de Abreu Silva Pereira, T. Tavares
{"title":"音乐中体裁和情感预测之间的相互作用:Emotify数据集的研究","authors":"Leonardo Vilela de Abreu Silva Pereira, T. Tavares","doi":"10.5753/sbcm.2021.19421","DOIUrl":null,"url":null,"abstract":"Automatic classification problems are common in the music information retrieval domain. Among those we can find the automatic identification of music genre and music mood as frequently approached problems. The labels related to genre and mood are both generated by humans, according to subjective experiences related to each individual’s growth and development, that is, each person attributes different meanings to genre and mood labels. However, because both genre and mood arise from a similar process related to the social surroundings of an individual, we hypothesize that they are somehow related. In this study, we present experiments performed in the Emotify dataset, which comprises audio data and genre and mood-related tags for several pieces. We show that we can predict genre from audio data with a high accuracy; however, we consistently obtained low accuracy to predict mood tags. Additionally, we tried to use mood tags to predict genre, and also obtained a low accuracy. An analysis of the feature space reveals that our features are more related to genre than to mood, which explains the results from a linear algebra viewpoint. However, we still cannot find a music-related explanation to this difference.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An interplay between genre and emotion prediction in music: a study in the Emotify dataset\",\"authors\":\"Leonardo Vilela de Abreu Silva Pereira, T. Tavares\",\"doi\":\"10.5753/sbcm.2021.19421\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automatic classification problems are common in the music information retrieval domain. Among those we can find the automatic identification of music genre and music mood as frequently approached problems. The labels related to genre and mood are both generated by humans, according to subjective experiences related to each individual’s growth and development, that is, each person attributes different meanings to genre and mood labels. However, because both genre and mood arise from a similar process related to the social surroundings of an individual, we hypothesize that they are somehow related. In this study, we present experiments performed in the Emotify dataset, which comprises audio data and genre and mood-related tags for several pieces. We show that we can predict genre from audio data with a high accuracy; however, we consistently obtained low accuracy to predict mood tags. Additionally, we tried to use mood tags to predict genre, and also obtained a low accuracy. An analysis of the feature space reveals that our features are more related to genre than to mood, which explains the results from a linear algebra viewpoint. 
However, we still cannot find a music-related explanation to this difference.\",\"PeriodicalId\":292360,\"journal\":{\"name\":\"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)\",\"volume\":\"130 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5753/sbcm.2021.19421\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5753/sbcm.2021.19421","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Automatic classification problems are common in the music information retrieval domain. Among these, the automatic identification of music genre and music mood are frequently approached problems. Genre and mood labels are both generated by humans according to subjective experiences tied to each individual's growth and development; that is, each person attributes different meanings to genre and mood labels. However, because both genre and mood arise from a similar process related to the social surroundings of an individual, we hypothesize that they are somehow related. In this study, we present experiments performed on the Emotify dataset, which comprises audio data and genre- and mood-related tags for several pieces. We show that we can predict genre from audio data with high accuracy; however, we consistently obtained low accuracy when predicting mood tags. Additionally, we tried to use mood tags to predict genre, and again obtained low accuracy. An analysis of the feature space reveals that our features are more related to genre than to mood, which explains the results from a linear algebra viewpoint. However, we still cannot find a music-related explanation for this difference.
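
The abstract does not include the authors' code, but the experiment it describes maps onto a standard MIR pipeline: extract audio features, train a classifier once against genre labels and once against mood labels, and compare how well the same feature space separates each label set. Below is a minimal sketch under stated assumptions, not the authors' actual pipeline: MFCC summary statistics as features, logistic regression as the classifier, and a hypothetical annotation file `emotify_annotations.csv` with `path`, `genre`, and `dominant_mood` columns (Emotify itself provides one-minute excerpts in four genres annotated with nine GEMS mood categories). The silhouette comparison at the end mirrors the abstract's feature-space analysis: if features cluster more cleanly by genre than by mood, a linear classifier will separate genres more easily.

```python
# Hypothetical sketch of the experiment described in the abstract; the
# feature set, classifier, and file layout are assumptions, not the paper's.
import csv

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import silhouette_score
from sklearn.model_selection import cross_val_score


def summarize(path: str) -> np.ndarray:
    """Mean and std of 20 MFCCs: one fixed-length vector per audio clip."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical annotation file; column names are placeholders.
with open("emotify_annotations.csv", newline="") as f:
    rows = list(csv.DictReader(f))
paths = [r["path"] for r in rows]
genres = [r["genre"] for r in rows]         # e.g. rock/classical/pop/electronic
moods = [r["dominant_mood"] for r in rows]  # e.g. one of the nine GEMS tags

X = np.stack([summarize(p) for p in paths])

# Same features, two label sets: the abstract reports high accuracy for
# genre and consistently low accuracy for mood.
clf = LogisticRegression(max_iter=1000)
print("genre accuracy:", cross_val_score(clf, X, genres, cv=5).mean())
print("mood accuracy: ", cross_val_score(clf, X, moods, cv=5).mean())

# One simple proxy for the abstract's feature-space analysis: a higher
# silhouette score means the classes are more separated in this space.
print("genre separability:", silhouette_score(X, genres))
print("mood separability: ", silhouette_score(X, moods))
```

A classifier-free separability measure like the silhouette score is used here because it isolates the abstract's geometric claim (the features align with genre, not mood) from any particular model's inductive bias.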