{"title":"Exploring Data Augmentation to Improve Music Genre Classification with ConvNets","authors":"R. L. Aguiar, Yandre M. G. Costa, C. Silla","doi":"10.1109/IJCNN.2018.8489166","DOIUrl":null,"url":null,"abstract":"In this work we address the automatic music genre classification as a pattern recognition task. The content of the music pieces were handled in the visual domain, using spectrograms created from the audio signal. This kind of image has been successfully used in this task since 2011 by extracting handcrafted features based on texture, since it is the main visual attribute found in spectrograms. In this work, the patterns were described by representation learning obtained with the use of convolutional neural network (CNN). CNN is a deep learning architecture and it has been widely used in the pattern recognition literature. Overfitting is a recurrent problem when a classification task is addressed by using CNN, it may occur due to the lack of training samples and/or due to the high dimensionality of the space. To increase the generalization capability we propose to explore data augmentation techniques. In this work, we have carefully selected strategies of data augmentation that are suitable for this kind of application, which are: adding noise, pitch shifting, loudness variation and time stretching. Experiments were conducted on the Latin Music Database (LMD), and the best obtained accuracy overcame the state of the art considering approaches based only in CNN.","PeriodicalId":134599,"journal":{"name":"IEEE International Joint Conference on Neural Network","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Joint Conference on Neural Network","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2018.8489166","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
In this work we address automatic music genre classification as a pattern recognition task. The content of the music pieces was handled in the visual domain, using spectrograms created from the audio signal. This kind of image has been used successfully for this task since 2011 by extracting handcrafted features based on texture, which is the main visual attribute found in spectrograms. Here, the patterns were instead described by representation learning obtained with a convolutional neural network (CNN), a deep learning architecture widely used in the pattern recognition literature. Overfitting is a recurrent problem when a classification task is addressed with a CNN; it may occur due to the lack of training samples and/or the high dimensionality of the feature space. To increase generalization capability, we propose to explore data augmentation techniques. We carefully selected augmentation strategies suitable for this kind of application: adding noise, pitch shifting, loudness variation, and time stretching. Experiments were conducted on the Latin Music Database (LMD), and the best obtained accuracy surpassed the state of the art among approaches based only on CNNs.
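The sketch below illustrates, under stated assumptions, the four audio-level augmentation strategies named in the abstract (adding noise, pitch shifting, loudness variation, and time stretching) and how each augmented signal could then be converted into a spectrogram for a CNN. It assumes librosa for loading and for the pitch/tempo effects; the file path, parameter values, and helper names are illustrative and are not the paper's actual settings.

```python
# Hypothetical sketch of the four augmentation strategies from the abstract.
# Parameter values are illustrative, not taken from the paper.
import numpy as np
import librosa

def add_noise(y, noise_factor=0.005):
    """Mix in white Gaussian noise scaled by noise_factor."""
    return y + noise_factor * np.random.randn(len(y))

def shift_pitch(y, sr, n_steps=2):
    """Shift pitch by n_steps semitones without changing duration."""
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

def change_loudness(y, gain_db=3.0):
    """Scale amplitude by a gain expressed in decibels."""
    return y * (10.0 ** (gain_db / 20.0))

def stretch_time(y, rate=1.1):
    """Speed up (rate > 1) or slow down (rate < 1) without changing pitch."""
    return librosa.effects.time_stretch(y, rate=rate)

# Load a clip (hypothetical path), build augmented copies, and turn each
# into the kind of spectrogram image a CNN would consume.
y, sr = librosa.load("example.wav", sr=22050)
augmented = [
    add_noise(y),
    shift_pitch(y, sr),
    change_loudness(y),
    stretch_time(y),
]
spectrograms = [
    librosa.power_to_db(librosa.feature.melspectrogram(y=a, sr=sr), ref=np.max)
    for a in augmented
]
```

Each entry in `spectrograms` is a 2-D array (frequency bins by time frames) that can be saved or fed as an image-like input to a CNN, so every original track yields several training samples from slightly perturbed audio.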