MKEAH: Multimodal knowledge extraction and accumulation based on hyperplane embedding for knowledge-based visual question answering

Heng Zhang, Zhihua Wei, Guanming Liu, Rui Wang, Ruibin Mu, Chuanbao Liu, Aiquan Yuan, Guodong Cao, Ning Hu

Virtual Reality & Intelligent Hardware, 2024-08-01. DOI: 10.1016/j.vrih.2023.06.002
Citations: 0
Abstract
Background
External knowledge representations play an essential role in knowledge-based visual question answering, enabling better understanding of complex scenarios in the open world. Recent entity-relationship embedding approaches are deficient in representing some complex relations, resulting in a lack of topic-related knowledge and redundant topic-irrelevant information.
Methods
To this end, we propose MKEAH: Multimodal Knowledge Extraction and Accumulation on Hyperplanes. To ensure that the lengths of the feature vectors projected onto the hyperplane are directly comparable, and to filter out topic-irrelevant information sufficiently, two losses are proposed to learn the triplet representations from two complementary views: a range loss and an orthogonal loss. To interpret the model's capability of extracting topic-related knowledge, we present the Topic Similarity (TS) between topics and entity relations.
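The hyperplane projection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a TransH-style projection (removing the component along the hyperplane's normal vector), and the `range_loss` and `orthogonal_loss` functions are hypothetical interpretations of the two losses named in the abstract.

```python
import torch
import torch.nn.functional as F


def project_to_hyperplane(v: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Project embedding v onto the hyperplane with normal vector w
    (TransH-style): subtract the component of v along the unit normal."""
    w = F.normalize(w, dim=-1)
    return v - (v * w).sum(dim=-1, keepdim=True) * w


def range_loss(h_proj: torch.Tensor, t_proj: torch.Tensor) -> torch.Tensor:
    """Hypothetical range loss: encourage projected head/tail vectors to
    have comparable lengths so their norms can be compared fairly."""
    return ((h_proj.norm(dim=-1) - t_proj.norm(dim=-1)) ** 2).mean()


def orthogonal_loss(r: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Hypothetical orthogonal loss: penalize the component of the relation
    vector r along the normal w, keeping r inside the hyperplane and
    filtering out the (topic-irrelevant) off-hyperplane component."""
    w = F.normalize(w, dim=-1)
    return ((r * w).sum(dim=-1) ** 2).mean()
```

Under this reading, a triplet (head, relation, tail) is scored after projecting head and tail onto the relation-specific hyperplane, so topic-irrelevant components orthogonal to that hyperplane are discarded before comparison.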
Results
Experimental results demonstrate the effectiveness of hyperplane embedding for knowledge representation in knowledge-based visual question answering. Our model outperformed state-of-the-art methods by 2.12% and 3.24% on two challenging knowledge-requiring datasets, OK-VQA and KRVQA, respectively.
Conclusions
The clear advantages of our model in terms of TS show that using hyperplane embedding to represent multimodal knowledge improves the model's ability to extract topic-related knowledge.