Visual Question Answering over Scene Graph

Soohyeong Lee, Ju-Whan Kim, Youngmin Oh, Joo Hyuk Jeon
DOI: 10.1109/GC46384.2019.00015
Published in: 2019 First International Conference on Graph Computing (GC), September 2019
Citations: 21

Abstract

Visual question answering (VQA) is a task that takes an image and a related natural language question as input, and produces an answer as output. A successful VQA algorithm requires two key components: obtaining a structured representation of an image, and processing a natural language question over that structured representation. While traditional VQA tasks work on raw images or image segmentations, recent VQA datasets such as CLEVR and GQA provide scene graphs that capture the objects and relationships expressed in an image. However, even when the ground-truth scene graph is given, it is non-trivial to obtain the right answer to a natural language question, as this requires a sophisticated algorithm that processes the scene graph and the question together. We propose to encode a scene graph and a question using a Graph Network (GN). We then feed the encoded graph, together with the question, to the Memory, Attention, and Composition (MAC) model to classify the answer. By including the question as a global vector in the GN, we achieved an accuracy of 96.3% on GQA, surpassing the 83.5% of the baseline method reported by the authors of GQA, which also used MAC to classify the answer. Our work suggests that a context-based encoding of the scene graph is crucial for graph-based reasoning tasks such as graph-related question answering.
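The key idea in the abstract, conditioning every scene-graph update on the question by placing the question embedding in the Graph Network's global slot, can be sketched as a single GN block in the style of Battaglia et al. This is a minimal illustrative sketch, not the authors' implementation: the feature size `D`, the one-layer "MLP" stand-ins, and the toy scene graph are all assumptions made here for clarity.

```python
# Hypothetical sketch of one Graph Network (GN) block where the question
# embedding is the graph's global attribute u, so every edge and node
# update is conditioned on the question (as the abstract describes).
import numpy as np

rng = np.random.default_rng(0)
D = 8  # feature size for nodes, edges, and the global (arbitrary choice)

def mlp(w, x):
    """One-layer 'MLP' stand-in: linear map followed by ReLU."""
    return np.maximum(w @ x, 0.0)

def gn_block(nodes, edges, senders, receivers, u, params):
    """One GN message-passing step with the question vector u as the global."""
    w_e, w_v, w_u = params
    # 1) Edge update: each edge sees its features, both endpoints, and u.
    new_edges = np.stack([
        mlp(w_e, np.concatenate([edges[k], nodes[senders[k]],
                                 nodes[receivers[k]], u]))
        for k in range(len(edges))
    ])
    # 2) Node update: each node aggregates its incoming edges, then sees u.
    new_nodes = []
    for i in range(len(nodes)):
        incoming = new_edges[[k for k in range(len(edges)) if receivers[k] == i]]
        agg = incoming.sum(axis=0) if len(incoming) else np.zeros(D)
        new_nodes.append(mlp(w_v, np.concatenate([agg, nodes[i], u])))
    new_nodes = np.stack(new_nodes)
    # 3) Global update: u sees the aggregated edges and nodes.
    new_u = mlp(w_u, np.concatenate([new_edges.sum(0), new_nodes.sum(0), u]))
    return new_nodes, new_edges, new_u

# Toy scene graph: 3 objects, 2 relations (e.g. "cube left-of sphere").
nodes = rng.normal(size=(3, D))
edges = rng.normal(size=(2, D))
senders, receivers = [0, 1], [1, 2]
u = rng.normal(size=D)  # question embedding (from some hypothetical encoder)
params = (rng.normal(size=(D, 4 * D)),   # edge MLP: edge + 2 nodes + u
          rng.normal(size=(D, 3 * D)),   # node MLP: aggregated edges + node + u
          rng.normal(size=(D, 3 * D)))   # global MLP: edges + nodes + u

nodes2, edges2, u2 = gn_block(nodes, edges, senders, receivers, u, params)
print(nodes2.shape, edges2.shape, u2.shape)  # (3, 8) (2, 8) (8,)
```

In the paper's pipeline, the node embeddings produced by such question-conditioned message passing would then be handed to the MAC model for answer classification; the sketch above only shows the encoding step.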