{"title":"基于多数据融合的多模态音乐情感识别方法","authors":"Fanguang Zeng","doi":"10.1504/ijart.2023.133662","DOIUrl":null,"url":null,"abstract":"In order to overcome the problems of low recognition accuracy and long recognition time in traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multiple data fusion is proposed. The multi-modal music emotion is decomposed by the non-negative matrix decomposition method to obtain the multi-modal data of audio and lyrics, and extract the audio modal emotional features and text modal emotional features respectively. After the multi-modal data of the two modal emotional features are weighted and fused through the linear prediction residual, the normalised multi-modal data is used as the training sample and input into the classification model based on support vector machine, so as to identify multimodal music emotion. The experimental results show that the proposed method takes the shortest time for multimodal music emotion recognition and improves the recognition accuracy.","PeriodicalId":38696,"journal":{"name":"International Journal of Arts and Technology","volume":"62 1","pages":"0"},"PeriodicalIF":0.2000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal music emotion recognition method based on multi data fusion\",\"authors\":\"Fanguang Zeng\",\"doi\":\"10.1504/ijart.2023.133662\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to overcome the problems of low recognition accuracy and long recognition time in traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multiple data fusion is proposed. 
The multi-modal music emotion is decomposed by the non-negative matrix decomposition method to obtain the multi-modal data of audio and lyrics, and extract the audio modal emotional features and text modal emotional features respectively. After the multi-modal data of the two modal emotional features are weighted and fused through the linear prediction residual, the normalised multi-modal data is used as the training sample and input into the classification model based on support vector machine, so as to identify multimodal music emotion. The experimental results show that the proposed method takes the shortest time for multimodal music emotion recognition and improves the recognition accuracy.\",\"PeriodicalId\":38696,\"journal\":{\"name\":\"International Journal of Arts and Technology\",\"volume\":\"62 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.2000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Arts and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1504/ijart.2023.133662\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Arts and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/ijart.2023.133662","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Multimodal music emotion recognition method based on multi data fusion
To overcome the low recognition accuracy and long recognition time of traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multiple data fusion is proposed. Multimodal music is decomposed with non-negative matrix factorisation to obtain audio and lyric data, from which audio-modality and text-modality emotional features are extracted respectively. The features of the two modalities are weighted and fused via the linear prediction residual; the normalised fused data are then used as training samples for a classification model based on a support vector machine, which identifies the multimodal music emotion. Experimental results show that the proposed method requires the shortest recognition time among the compared methods and improves recognition accuracy.
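The pipeline the abstract describes (per-modality decomposition, weighted fusion, normalisation, SVM classification) can be sketched as follows. This is a minimal illustration on synthetic data: the feature extraction from real audio and lyrics, the fusion weights, and the number of NMF components are all assumptions, not values from the paper.

```python
# Sketch of the abstract's pipeline: NMF per modality, weighted
# fusion, normalisation, then an SVM classifier. Data are synthetic.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200

# Stand-ins for non-negative audio and lyric feature matrices.
audio_raw = rng.random((n_samples, 40))
text_raw = rng.random((n_samples, 60))
labels = rng.integers(0, 4, size=n_samples)  # e.g. four emotion classes

# Non-negative matrix factorisation reduces each modality separately.
audio_feat = NMF(n_components=8, init="nndsvda", max_iter=500,
                 random_state=0).fit_transform(audio_raw)
text_feat = NMF(n_components=8, init="nndsvda", max_iter=500,
                random_state=0).fit_transform(text_raw)

# Weighted fusion of the two modalities (weights are placeholders;
# the paper derives its weighting from the linear prediction residual).
w_audio, w_text = 0.6, 0.4
fused = np.hstack([w_audio * audio_feat, w_text * text_feat])

# Normalise each fused sample before it becomes a training sample.
fused = normalize(fused)

# SVM-based classification model on the fused features.
clf = SVC(kernel="rbf").fit(fused, labels)
predictions = clf.predict(fused[:5])
```

The key design point is that each modality is factorised on its own before fusion, so the fused vector keeps a fixed, low dimensionality regardless of the raw feature sizes.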
Journal description:
IJART addresses arts and new technologies, with an emphasis on computational art. As intelligent devices, sensors and ambient-intelligent/ubiquitous systems evolve, projects are exploring the design of intelligent artistic artefacts. Ambient intelligence supports the vision of technology becoming invisible: embedded in our natural surroundings, present whenever needed, attuned to all senses, adaptive to users and context, and acting autonomously. It brings art to ordinary people and offers artists creative tools that extend the grammar of the traditional arts. Information environments will be major drivers of culture.