{"title":"基于自适应网络的模糊音乐情感识别","authors":"Paulo Sergio da Conceição Moreira, D. Tsunoda","doi":"10.1080/09298215.2021.1977339","DOIUrl":null,"url":null,"abstract":"This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS). For this, we applied such structure in 877 MP3 files with thirty seconds duration each, collected directly on the YouTube platform, which represent the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when the emotions fear and happiness were in the same set, raising the hypothesis that only the audio content is not enough to distinguish between these emotions. Based on the results, we identified potential in the application of the ANFIS framework for problems with uncertainty and subjectivity.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"342 - 354"},"PeriodicalIF":1.1000,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Recognition of emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS)\",\"authors\":\"Paulo Sergio da Conceição Moreira, D. Tsunoda\",\"doi\":\"10.1080/09298215.2021.1977339\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS). For this, we applied such structure in 877 MP3 files with thirty seconds duration each, collected directly on the YouTube platform, which represent the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when the emotions fear and happiness were in the same set, raising the hypothesis that only the audio content is not enough to distinguish between these emotions. 
Based on the results, we identified potential in the application of the ANFIS framework for problems with uncertainty and subjectivity.\",\"PeriodicalId\":16553,\"journal\":{\"name\":\"Journal of New Music Research\",\"volume\":\"50 1\",\"pages\":\"342 - 354\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2021-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of New Music Research\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1080/09298215.2021.1977339\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of New Music Research","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/09298215.2021.1977339","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy Inference System (ANFIS). To this end, we applied this structure to 877 MP3 files, each thirty seconds long, collected directly from the YouTube platform and representing the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for the three- and two-emotion sets, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when fear and happiness appeared in the same set, raising the hypothesis that audio content alone is not enough to distinguish between these emotions. Based on the results, we identified potential in applying the ANFIS framework to problems involving uncertainty and subjectivity.
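The abstract does not detail the implementation, but the ANFIS structure it refers to is a standard first-order Sugeno fuzzy inference system with learnable parameters. As a rough illustration only, the sketch below shows such a forward pass with Gaussian membership functions; all feature names, parameter values, and the two-rule layout are hypothetical assumptions, not values from the paper.

```python
# Minimal ANFIS-style forward pass (first-order Sugeno, Gaussian memberships).
# All parameters and inputs below are hypothetical illustrations, not values
# from the paper; in practice they would be fitted (e.g. by hybrid
# least-squares / gradient learning) on extracted audio features.
import numpy as np

def gaussian_mf(x, c, sigma):
    """Layer 1: membership degree of input x in a Gaussian fuzzy set."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def anfis_forward(x, mf_params, consequent_params):
    """x: 1-D array of input features (e.g. audio descriptors).
    mf_params: per rule, a list of (center, sigma) pairs, one per input.
    consequent_params: array of shape (n_rules, n_inputs + 1), linear consequents."""
    n_rules = consequent_params.shape[0]
    # Layer 2: rule firing strengths = product of membership degrees.
    w = np.ones(n_rules)
    for r in range(n_rules):
        for i, xi in enumerate(x):
            c, s = mf_params[r][i]
            w[r] *= gaussian_mf(xi, c, s)
    # Layer 3: normalise firing strengths.
    w_norm = w / w.sum()
    # Layer 4: first-order (linear) rule consequents f_r = p_r . x + bias_r.
    f = consequent_params[:, :-1] @ x + consequent_params[:, -1]
    # Layer 5: weighted sum gives the crisp output.
    return float(w_norm @ f)

# Hypothetical example: two features (say, spectral centroid and RMS energy,
# both scaled to [0, 1]) and two rules; the output could then be thresholded
# or compared across per-emotion models to assign a label.
x = np.array([0.3, 0.7])
mf_params = [
    [(0.2, 0.15), (0.8, 0.2)],   # rule 1: feature 1 "low", feature 2 "high"
    [(0.7, 0.2), (0.3, 0.15)],   # rule 2: feature 1 "high", feature 2 "low"
]
consequents = np.array([[0.5, 1.0, 0.1],
                        [-0.4, 0.2, 0.9]])
print(anfis_forward(x, mf_params, consequents))
```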
Journal introduction:
The Journal of New Music Research (JNMR) publishes material which increases our understanding of music and musical processes by systematic, scientific and technological means. Research published in the journal is innovative, empirically grounded and often, but not exclusively, uses quantitative methods. Articles are both musically relevant and scientifically rigorous, giving full technical details. No bounds are placed on the music or musical behaviours at issue: popular music, music of diverse cultures and the canon of western classical music are all within the Journal’s scope. Articles deal with theory, analysis, composition, performance, uses of music, instruments and other music technologies. The Journal was founded in 1972 with the original title Interface to reflect its interdisciplinary nature, drawing on musicology (including music theory), computer science, psychology, acoustics, philosophy, and other disciplines.