Recurrent Visual Relationship Recognition with Triplet Unit for Diversity

Kento Masui, A. Ochiai, Shintaro Yoshizawa, Hideki Nakayama
DOI: 10.1142/S1793351X18400214
Journal: Int. J. Semantic Comput. (published December 2018)
Citations: 1

Abstract

The task of visual relationship recognition (VRR) is to recognize multiple objects and their relationships in an image. A fundamental difficulty of this task is class-number scalability, since the number of possible relationships we need to consider causes a combinatorial explosion. Another difficulty is modeling how to avoid outputting semantically redundant relationships. To overcome these challenges, this paper proposes a novel architecture with a recurrent neural network (RNN) and a triplet unit (TU). The RNN allows our model to be optimized for outputting a sequence of relationships. By optimizing our model to produce a semantically diverse relationship sequence, we increase the variety of output relationships. At each step of the RNN, our TU enables the model to classify a relationship while achieving class-number scalability by decomposing a relationship into a subject–predicate–object (SPO) triplet. We evaluate our model on various datasets and compare the results to a baseline. These experimental results show our model's superior recall and precision with fewer predictions compared to the baseline, even as it produces greater variety in relationships.
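The class-number-scalability argument above can be illustrated with a quick back-of-the-envelope calculation. The class counts below are hypothetical (the actual numbers depend on the dataset used); the point is only the contrast between classifying whole relationships jointly and decomposing each one into an SPO triplet, as the triplet unit does.

```python
# Hypothetical class counts, chosen only for illustration.
num_object_classes = 100    # assumed number of object categories
num_predicate_classes = 70  # assumed number of predicate categories

# Treating every subject-predicate-object combination as a single class
# requires one output per combination:
joint_classes = num_object_classes * num_predicate_classes * num_object_classes

# Decomposing a relationship into an SPO triplet instead needs only three
# small classifiers, one each for subject, predicate, and object:
decomposed_outputs = (num_object_classes
                      + num_predicate_classes
                      + num_object_classes)

print(joint_classes)       # 700000
print(decomposed_outputs)  # 270
```

Under these assumed counts, the joint formulation needs 700,000 output classes while the decomposed formulation needs only 270 outputs, which is why the triplet decomposition sidesteps the combinatorial explosion.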