{"title":"基于多通道增强图卷积网络的器乐描述情感分析","authors":"Fangge Lv , Huasang Wang","doi":"10.1016/j.aej.2025.03.088","DOIUrl":null,"url":null,"abstract":"<div><div>Traditional single-channel feature extraction methods face challenges in instrumental music sentiment analysis, primarily due to their reliance on a single type of dependency, which overlooks the complex relationships between musical elements and emotions. While graph convolutional network (GCN)-based approaches show potential, they still struggle with aggregating both musical structure information and emotional details, especially in instrumental music without lyrics, where misinterpretation of emotional features is common. Moreover, insufficient domain knowledge hinders the model’s ability to capture subtle differences in musical terminology, further reducing sentiment analysis accuracy. To address these challenges, we propose a sentiment analysis graph neural network called KSD-GCN, where the sentiment-enhanced syntactic graph convolution module enriches the dependency graph by integrating external sentiment knowledge, thereby improving the model’s ability to capture emotions. The dependency relation embedding module focuses on capturing syntactic dependency information within the sentence. Additionally, we introduce a multi-layer interactive attention mechanism that effectively integrates syntactic, dependency, and semantic information. Through this interaction, the model can finely capture the sentiment and syntactic structure of the sentence at different layers, significantly improving the accuracy of aspect-based sentiment analysis. 
Experimental results show that, on multiple datasets, the model outperforms baseline models across several performance metrics.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"124 ","pages":"Pages 527-539"},"PeriodicalIF":6.2000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-channel enhanced graph convolutional network for sentiment analysis on instrumental music descriptions\",\"authors\":\"Fangge Lv , Huasang Wang\",\"doi\":\"10.1016/j.aej.2025.03.088\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Traditional single-channel feature extraction methods face challenges in instrumental music sentiment analysis, primarily due to their reliance on a single type of dependency, which overlooks the complex relationships between musical elements and emotions. While graph convolutional network (GCN)-based approaches show potential, they still struggle with aggregating both musical structure information and emotional details, especially in instrumental music without lyrics, where misinterpretation of emotional features is common. Moreover, insufficient domain knowledge hinders the model’s ability to capture subtle differences in musical terminology, further reducing sentiment analysis accuracy. To address these challenges, we propose a sentiment analysis graph neural network called KSD-GCN, where the sentiment-enhanced syntactic graph convolution module enriches the dependency graph by integrating external sentiment knowledge, thereby improving the model’s ability to capture emotions. The dependency relation embedding module focuses on capturing syntactic dependency information within the sentence. Additionally, we introduce a multi-layer interactive attention mechanism that effectively integrates syntactic, dependency, and semantic information. 
Through this interaction, the model can finely capture the sentiment and syntactic structure of the sentence at different layers, significantly improving the accuracy of aspect-based sentiment analysis. Experimental results show that, on multiple datasets, the model outperforms baseline models across several performance metrics.</div></div>\",\"PeriodicalId\":7484,\"journal\":{\"name\":\"alexandria engineering journal\",\"volume\":\"124 \",\"pages\":\"Pages 527-539\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2025-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"alexandria engineering journal\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1110016825003965\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825003965","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Multi-channel enhanced graph convolutional network for sentiment analysis on instrumental music descriptions
Traditional single-channel feature extraction methods face challenges in instrumental music sentiment analysis, primarily due to their reliance on a single type of dependency, which overlooks the complex relationships between musical elements and emotions. While graph convolutional network (GCN)-based approaches show potential, they still struggle to aggregate both musical structure information and emotional details, especially in instrumental music without lyrics, where misinterpretation of emotional features is common. Moreover, insufficient domain knowledge hinders the model's ability to capture subtle differences in musical terminology, further reducing sentiment analysis accuracy. To address these challenges, we propose a sentiment analysis graph neural network called KSD-GCN, in which a sentiment-enhanced syntactic graph convolution module enriches the dependency graph by integrating external sentiment knowledge, thereby improving the model's ability to capture emotions. The dependency relation embedding module focuses on capturing syntactic dependency information within the sentence. Additionally, we introduce a multi-layer interactive attention mechanism that effectively integrates syntactic, dependency, and semantic information. Through this interaction, the model captures the sentiment and syntactic structure of the sentence at different layers, significantly improving the accuracy of aspect-based sentiment analysis. Experiments on multiple datasets show that the model outperforms baseline models on several performance metrics.
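The core idea the abstract describes, a graph convolution over a dependency graph whose edges are enriched with external sentiment knowledge, can be illustrated with a minimal sketch. Everything below (the function name, the edge-boosting scheme, the normalization) is an illustrative assumption, not the paper's actual implementation:

```python
# Hypothetical sketch of a sentiment-enhanced graph convolution step, loosely
# following the abstract's description of KSD-GCN. All names and the
# edge-weighting scheme are illustrative assumptions, not the paper's code.

def sentiment_enhanced_gcn_layer(adj, feats, weight, sentiment_scores):
    """One graph-convolution step over a dependency graph whose edges are
    boosted by external sentiment knowledge.

    adj:              n x n dependency adjacency matrix (0/1, self-loops included)
    feats:            n x d node feature matrix (one row per token)
    weight:           d x d' projection matrix (trainable in a real model)
    sentiment_scores: length-n list of lexicon scores in [0, 1], one per token
    """
    n = len(adj)
    # Enrich edges: an edge pointing at a sentiment-bearing word gets
    # extra weight, so emotional tokens contribute more to aggregation.
    enriched = [[adj[i][j] * (1.0 + sentiment_scores[j]) for j in range(n)]
                for i in range(n)]
    out = []
    for i in range(n):
        # Row-normalize so each node averages over its (weighted) neighbours.
        deg = sum(enriched[i]) or 1.0
        agg = [sum(enriched[i][j] * feats[j][k] for j in range(n)) / deg
               for k in range(len(feats[0]))]
        # Linear projection followed by ReLU.
        row = [max(0.0, sum(agg[k] * weight[k][m] for k in range(len(agg))))
               for m in range(len(weight[0]))]
        out.append(row)
    return out
```

In a full model this step would be stacked in multiple layers and combined with the dependency relation embeddings and the interactive attention mechanism; the sketch only shows how a sentiment lexicon can reweight a dependency graph before aggregation.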
About the journal:
Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering