Multimodal Knowledge Triple Extraction Based on Representation Learning

Guang-ming Xian, Wencong Zhang, Fucai Lan, Yifan Lin, Yanhang Lin
{"title":"Multimodal Knowledge Triple Extraction Based on Representation Learning","authors":"Guang-ming Xian, Wencong Zhang, Fucai Lan, Yifan Lin, Yanhang Lin","doi":"10.1109/EEI59236.2023.10212829","DOIUrl":null,"url":null,"abstract":"Knowledge based visual question-answering is an emerging technique that combines computer vision and natural language processing to address image-based questions. This approach requires the model to possess internal reasoning ability and incorporate external knowledge to enhance its generalization performance. Knowledge graphs are commonly employed to integrate various types of knowledge, such as image knowledge, text knowledge, and basic common sense, into visual language question answering models, significantly enhancing their interpretability. However, introducing knowledge into these models often involves retrieval and pre-training methods, which can introduce noise due to the local correlation among knowledge sources and the existence of modality gaps, thereby affecting the model's performance. To mitigate this issue, this paper proposes a multimodal knowledge extraction approach based on distributed representation learning. The approach models inexpressible multimodal facts using explicit triples, considering the semantic gap between visual and textual modalities. Cross-modal representations are obtained through an attention mechanism, resulting in knowledge triplets comprising visual features, cross-modal representations, and text features. By incorporating these knowledge triplets into the visual language question answering model, the task is transformed into pattern matching using knowledge triples. The proposed approach comprehensively considers multiple factors, is not restricted to specific forms of knowledge, and can effectively incorporate a substantial amount of knowledge. Experimental results demonstrate its superiority, with a performance improvement of 0.5% on the OKVQA dataset compared to the baseline model, highlighting its strong generalization ability.","PeriodicalId":363603,"journal":{"name":"2023 5th International Conference on Electronic Engineering and Informatics (EEI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 5th International Conference on Electronic Engineering and Informatics (EEI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EEI59236.2023.10212829","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Knowledge-based visual question answering is an emerging technique that combines computer vision and natural language processing to answer questions about images. This approach requires the model to possess internal reasoning ability and to incorporate external knowledge to improve its generalization performance. Knowledge graphs are commonly employed to integrate various types of knowledge, such as image knowledge, text knowledge, and basic common sense, into visual language question answering models, significantly enhancing their interpretability. However, introducing knowledge into these models often relies on retrieval and pre-training methods, which can introduce noise because of local correlations among knowledge sources and the gap between modalities, thereby degrading the model's performance. To mitigate this issue, this paper proposes a multimodal knowledge extraction approach based on distributed representation learning. The approach models otherwise inexpressible multimodal facts as explicit triples, accounting for the semantic gap between the visual and textual modalities. Cross-modal representations are obtained through an attention mechanism, yielding knowledge triples that comprise visual features, cross-modal representations, and text features. By incorporating these knowledge triples into the visual language question answering model, the task is transformed into pattern matching over knowledge triples. The proposed approach comprehensively considers multiple factors, is not restricted to specific forms of knowledge, and can effectively incorporate a substantial amount of knowledge. Experimental results demonstrate its superiority, with a performance improvement of 0.5% on the OKVQA dataset compared to the baseline model, highlighting its strong generalization ability.
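
The abstract describes obtaining cross-modal representations with an attention mechanism and packaging them with visual and text features into knowledge triples. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes PyTorch, a shared hidden size of 512, region-level visual features, and token-level question features, and shows one plausible way a cross-attention layer could produce a (visual feature, cross-modal representation, text feature) triple.

```python
# Hypothetical sketch of the triple construction described in the abstract
# (assumed architecture; the paper's actual model details are not given here).
import torch
import torch.nn as nn

class CrossModalTriple(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Text tokens attend to image regions, bridging the modality gap.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_feats: torch.Tensor, text_feats: torch.Tensor):
        # visual_feats: (batch, num_regions, dim); text_feats: (batch, num_tokens, dim)
        cross_modal, _ = self.cross_attn(query=text_feats,
                                         key=visual_feats,
                                         value=visual_feats)
        # Pool each stream to one vector per example and return the triple
        # (visual feature, cross-modal representation, text feature).
        return (visual_feats.mean(dim=1),
                cross_modal.mean(dim=1),
                text_feats.mean(dim=1))

# Example usage: 36 detected image regions, 20 question tokens, hidden size 512.
head, relation, tail = CrossModalTriple()(torch.randn(2, 36, 512),
                                          torch.randn(2, 20, 512))
```

Under this reading, matching a question-image pair against stored knowledge reduces to comparing such triples, which is consistent with the pattern-matching formulation the abstract mentions.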