Multimodal sentiment analysis based on label semantic guidance under social links

IF 7.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yun Liu, Xiaoming Zhang, Bo Zhang, Guofeng He, Ke Zhou, Zhoujun Li
DOI: 10.1016/j.patcog.2025.112277
Journal: Pattern Recognition, Volume 171, Article 112277
Published: 2025-08-22 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0031320325009380
Citations: 0

Abstract

Multimodal sentiment analysis based on label semantic guidance under social links
The proliferation of social media platforms has led to an explosion of multimodal data that encapsulates rich emotional content. Effectively integrating heterogeneous modalities to predict sentiment polarity remains a critical challenge. Existing approaches often underexploit sentiment prior knowledge and largely ignore the impact of social links on emotional trends, resulting in suboptimal performance. To address these limitations, we propose a novel multimodal sentiment analysis framework, i.e., Label Semantic Guidance under Social Links (LSGSL). LSGSL enhances sentiment reasoning by jointly modeling visual-textual features and the social relationships between users. Specifically, it encodes social links as a graph structure to facilitate sentiment-aware interactions across modalities, and introduces a novel use of sentiment labels-not merely as classification targets, but as semantic embeddings that guide the fusion and reasoning processes. Furthermore, LSGSL adopts a multi-task learning paradigm that jointly optimizes three objectives: image-text contrastive loss, sentiment-guided semantic similarity loss, and sentiment polarity classification loss. Extensive experiments on three widely-used benchmark datasets demonstrate that LSGSL consistently outperforms state-of-the-art methods, offering new insights into the role of social context and semantic label guidance in multimodal sentiment analysis.
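The abstract states that LSGSL encodes social links as a graph so that connected users' features inform each other's sentiment. The paper's actual architecture is not given here; as a minimal illustrative sketch, assuming a simple mean-neighbour aggregation (the function and parameter names below, `propagate` and `alpha`, are hypothetical, not from the paper), one round of sentiment-aware smoothing over the social graph might look like:

```python
def propagate(features, adj, alpha=0.5):
    """Blend each user's feature vector with the mean of its neighbours'.

    features: list of per-user feature vectors (lists of floats)
    adj:      adjacency list; adj[i] holds indices of users socially linked to user i
    alpha:    weight given to the neighbourhood average (0 = ignore the graph)
    """
    out = []
    for i, feat in enumerate(features):
        nbrs = [features[j] for j in adj[i]]
        if not nbrs:  # isolated user: keep their own features unchanged
            out.append(list(feat))
            continue
        mean = [sum(dim) / len(nbrs) for dim in zip(*nbrs)]
        out.append([(1 - alpha) * a + alpha * b for a, b in zip(feat, mean)])
    return out


# Toy example: users 0 and 1 are linked; user 2 is isolated.
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
adj = [[1], [0], []]
smoothed = propagate(feats, adj, alpha=0.5)
```

Stacking such rounds (or replacing the mean with learned attention weights) is the usual way graph structure propagates emotional trends between linked users, which is the intuition the abstract describes.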
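The abstract also names three jointly optimized objectives (image-text contrastive, sentiment-guided semantic similarity, sentiment classification) and describes sentiment labels being used as semantic embeddings rather than bare class indices. Exact loss definitions and weights are not given in this page; the sketch below is an assumption-laden illustration in which the label-semantic term pulls the fused feature toward its label's embedding via cosine similarity, and the total loss is a weighted sum (all names and weights are hypothetical):

```python
import math


def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def label_semantic_loss(fused, label_emb):
    """Sentiment-guided similarity term: small when the fused image-text
    feature aligns with the embedding of its ground-truth sentiment label."""
    return 1.0 - cosine(fused, label_emb)


def total_loss(l_itc, l_sem, l_cls, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three objectives named in the abstract:
    image-text contrastive, label-semantic similarity, classification."""
    return w[0] * l_itc + w[1] * l_sem + w[2] * l_cls
```

In a real implementation the contrastive and classification terms would come from standard InfoNCE and cross-entropy heads; the point of the sketch is only how a label embedding can act as a semantic target alongside the usual class index.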
Source journal: Pattern Recognition (Engineering/Technology – Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.