MESN: A multimodal knowledge graph embedding framework with expert fusion and relational attention

IF 7.2 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Ban Tran , Thanh Le
Citations: 0

Abstract

Knowledge graph embedding is essential for knowledge graph completion and downstream applications. However, in multimodal knowledge graphs, this task is particularly challenging due to incomplete and noisy multimodal data, which often fails to capture semantic relationships between entities. While existing methods attempt to integrate multimodal features, they frequently overlook relational semantics and cross-modal dependencies, leading to suboptimal entity representations. To address these limitations, we propose MESN, a novel multimodal embedding framework that integrates relational and multimodal signals through semantic aggregation and neighbor-aware attention mechanisms. MESN selectively extracts informative multimodal features via adaptive attention and expert-driven learning, ensuring more expressive entity embeddings. Additionally, we introduce an enhanced ComplEx-based scoring function, which effectively combines structured graph interactions with multimodal information, capturing both relational and feature diversity. Extensive experiments on standard multimodal datasets confirm that MESN significantly outperforms baselines across multiple evaluation metrics. Our findings highlight the importance of relational guidance in multimodal embedding tasks, paving the way for more robust and semantically-aware knowledge representations.
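The abstract describes an "enhanced ComplEx-based scoring function" but does not specify its exact form. As background, a minimal sketch of the standard ComplEx score that such a function would build on: entities and relations are complex-valued vectors, and a triple is scored by the real part of the trilinear product ⟨r, h, conj(t)⟩. The dimension and random inputs below are purely illustrative, not taken from the paper.

```python
# Sketch of the standard ComplEx scoring function, the base that MESN's
# enhanced variant reportedly extends (the enhanced form is not given in
# the abstract, so this is background, not the paper's method).
import numpy as np

def complex_score(head, relation, tail):
    """Score a (head, relation, tail) triple from complex 1-D embeddings.

    Returns Re(sum_i r_i * h_i * conj(t_i)); higher means more plausible.
    """
    return float(np.real(np.sum(relation * head * np.conj(tail))))

# Illustrative usage with random embeddings of (assumed) dimension 8.
rng = np.random.default_rng(0)
d = 8
h = rng.normal(size=d) + 1j * rng.normal(size=d)
r = rng.normal(size=d) + 1j * rng.normal(size=d)
t = rng.normal(size=d) + 1j * rng.normal(size=d)

print(complex_score(h, r, t))
```

Because conj(t) enters the product, the score is not symmetric in head and tail, which lets the model represent asymmetric relations that purely real bilinear models cannot.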
Source journal
Knowledge-Based Systems
Category: Engineering & Technology / Computer Science: Artificial Intelligence
CiteScore: 14.80
Self-citation rate: 12.50%
Articles published per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.