{"title":"基于协同训练的多调式音乐情绪分类","authors":"Y. Zhao, Deshun Yang, Xiaoou Chen","doi":"10.1109/CISE.2010.5677056","DOIUrl":null,"url":null,"abstract":"In this paper, we present a new approach to content-based music mood classification. Music, especially song, is born with multi-modality natures. But current studies are mainly focus on its audio modality, and the classification capability is not good enough. In this paper we use three modalities which are audio, lyric and MIDI. After extracting features from these three modalities respectively, we get three feature sets. We devise and compare three variants of standard co-training algorithm. The results show that these methods can effectively improve the classification accuracy.","PeriodicalId":232832,"journal":{"name":"2010 International Conference on Computational Intelligence and Software Engineering","volume":"62 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Multi-Modal Music Mood Classification Using Co-Training\",\"authors\":\"Y. Zhao, Deshun Yang, Xiaoou Chen\",\"doi\":\"10.1109/CISE.2010.5677056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present a new approach to content-based music mood classification. Music, especially song, is born with multi-modality natures. But current studies are mainly focus on its audio modality, and the classification capability is not good enough. In this paper we use three modalities which are audio, lyric and MIDI. After extracting features from these three modalities respectively, we get three feature sets. We devise and compare three variants of standard co-training algorithm. The results show that these methods can effectively improve the classification accuracy.\",\"PeriodicalId\":232832,\"journal\":{\"name\":\"2010 International Conference on Computational Intelligence and Software Engineering\",\"volume\":\"62 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-12-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 International Conference on Computational Intelligence and Software Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CISE.2010.5677056\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 International Conference on Computational Intelligence and Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISE.2010.5677056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-Modal Music Mood Classification Using Co-Training
In this paper, we present a new approach to content-based music mood classification. Music, and song in particular, is inherently multi-modal. However, current studies focus mainly on the audio modality, and the resulting classification accuracy is not good enough. In this paper we use three modalities: audio, lyrics, and MIDI. After extracting features from each of these modalities, we obtain three feature sets. We devise and compare three variants of the standard co-training algorithm. The results show that these methods effectively improve classification accuracy.
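The abstract names co-training but does not spell out the procedure. As a reference point only, below is a minimal sketch of standard co-training extended from two views to three (audio, lyric, MIDI feature sets), where each view's classifier labels its most confident unlabeled examples and those examples are added to every view's labeled pool. The classifier choice (logistic regression), the top-k confidence selection, the round/batch parameters, and all data shapes are assumptions for illustration, not the paper's actual variants.

```python
# Hedged sketch of three-view co-training; classifier and selection rule are
# illustrative assumptions, not the paper's specific algorithm variants.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(views_labeled, y, views_unlabeled, rounds=10, per_round=5):
    """views_labeled / views_unlabeled: lists of 2-D arrays, one per modality,
    with rows aligned across views (row i is the same song in every view)."""
    views_labeled = [v.copy() for v in views_labeled]
    views_unlabeled = [v.copy() for v in views_unlabeled]
    y = y.copy()
    clfs = [LogisticRegression(max_iter=1000) for _ in views_labeled]

    for _ in range(rounds):
        if len(views_unlabeled[0]) == 0:
            break
        # Train one classifier per view on the current labeled pool.
        for clf, Xl in zip(clfs, views_labeled):
            clf.fit(Xl, y)
        # Each view's classifier nominates its most confident unlabeled songs.
        picked, new_labels = set(), {}
        for clf, Xu in zip(clfs, views_unlabeled):
            proba = clf.predict_proba(Xu)
            conf = proba.max(axis=1)
            for idx in np.argsort(-conf)[:per_round]:
                if idx not in picked:
                    picked.add(idx)
                    new_labels[idx] = clf.classes_[proba[idx].argmax()]
        idxs = np.array(sorted(picked))
        # Move the newly pseudo-labeled songs into every view's labeled set,
        # so the three views keep teaching one another.
        for i in range(len(views_labeled)):
            views_labeled[i] = np.vstack([views_labeled[i],
                                          views_unlabeled[i][idxs]])
            views_unlabeled[i] = np.delete(views_unlabeled[i], idxs, axis=0)
        y = np.concatenate([y, np.array([new_labels[j] for j in idxs])])

    # Final fit on the enlarged labeled pool.
    for clf, Xl in zip(clfs, views_labeled):
        clf.fit(Xl, y)
    return clfs
```

At prediction time, one simple combination rule is to average the three classifiers' posterior probabilities over the corresponding feature views and take the argmax; whether the paper combines views this way, or evaluates each view's classifier separately, is not stated in the abstract.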