Multimodal Biological Knowledge Graph Completion via Triple Co-Attention Mechanism

Derong Xu, Jingbo Zhou, Tong Xu, Yuan Xia, Ji Liu, Enhong Chen, D. Dou
DOI: 10.1109/ICDE55515.2023.10231041
Published in: 2023 IEEE 39th International Conference on Data Engineering (ICDE), April 2023
Citations: 1

Abstract

Biological Knowledge Graphs (BKGs) help to model complex biological systems in a structured way to support various tasks. However, the incompleteness of existing BKGs may limit their performance, and new methods are still needed to reveal missing relations. Although great efforts have been devoted to knowledge graph completion, existing methods cannot easily be adapted to multimodal biological information such as molecular structures and textual descriptions. To this end, we propose a novel co-attention-based multimodal embedding framework, named CamE, for the multimodal BKG completion task. Specifically, we design a Triple Co-Attention (TCA) operator to capture and highlight the shared semantic features among different modalities. Based on TCA, we further propose two components to handle multimodal fusion and multimodal entity-relation interaction, respectively. The first is a multimodal TCA fusion module that produces a multimodal joint representation for each entity in the BKG; it projects the information from different modalities into a common space by capturing shared semantic features and overcoming the modality gap. The second is a relation-aware interactive TCA module that learns interactive representations by modelling the deep interaction between multimodal entities and relations. Extensive experiments on two real-world multimodal BKG datasets demonstrate that our method significantly outperforms several state-of-the-art baselines, including 10.3% and 16.2% improvements in MRR and Hits@1, respectively, over its best competitor on the public DRKG-MM dataset.
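The abstract does not spell out the TCA operator's internals, but the general idea of co-attention between two modalities can be illustrated with a minimal sketch. The bilinear affinity matrix, the `co_attention` function, and all shapes below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(A, B, W):
    """Generic bilinear co-attention between two modality feature matrices.

    A: (n_a, d) token features of modality a (e.g. molecular structure)
    B: (n_b, d) token features of modality b (e.g. textual description)
    W: (d, d) learnable bilinear weight
    Returns each modality's context summary conditioned on the other.
    """
    C = A @ W @ B.T                  # (n_a, n_b) cross-modal affinity matrix
    attn_a = softmax(C, axis=1)      # each a-token's attention over b-tokens
    attn_b = softmax(C.T, axis=1)    # each b-token's attention over a-tokens
    A_ctx = attn_b @ A               # (n_b, d): a-context for every b-token
    B_ctx = attn_a @ B               # (n_a, d): b-context for every a-token
    return A_ctx, B_ctx

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))          # 4 structure tokens, dim 8
B = rng.normal(size=(6, 8))          # 6 text tokens, dim 8
W = rng.normal(size=(8, 8))
A_ctx, B_ctx = co_attention(A, B, W)
print(A_ctx.shape, B_ctx.shape)      # (6, 8) (4, 8)
```

Each modality attends over the other through the shared affinity matrix, so features that align semantically across modalities receive higher weight; a fusion step (as in the paper's TCA fusion module) would then combine the attended representations into one joint embedding per entity.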