Graph-based Visual Manipulation Relationship Reasoning in Object-Stacking Scenes

Guoyu Zuo, Jiayuan Tong, Hongxing Liu, Wenbai Chen, Jianfeng Li
{"title":"对象堆叠场景中基于图的视觉操作关系推理","authors":"Guoyu Zuo, Jiayuan Tong, Hongxing Liu, Wenbai Chen, Jianfeng Li","doi":"10.1109/IJCNN52387.2021.9534389","DOIUrl":null,"url":null,"abstract":"In object-stacking scenes, robotic manipulation is one of the most important research topics in robotics. It is particularly significant to reason object relationships and obtain intelligent manipulation order for more advanced interaction between the robot and the environment. However, many existing methods focus on individual object features and ignore contextual information, leading to great challenges in efficiently reasoning manipulation relationship. In this paper, we introduce a novel graph-based visual manipulation relationship reasoning architecture that directly outputs object relationships and manipulation order. Our model first extracts features and detects objects from RGB images, and then adopts Graph Convolutional Network (GCN) to collect contextual information between objects. Moreover, a relationship filtering network is built to reduce object pairs before reasoning and improve the efficiency of relation reasoning. The experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods on reasoning object relationships in obiect-stackina scenes.","PeriodicalId":396583,"journal":{"name":"2021 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Graph-based Visual Manipulation Relationship Reasoning in Object-Stacking Scenes\",\"authors\":\"Guoyu Zuo, Jiayuan Tong, Hongxing Liu, Wenbai Chen, Jianfeng Li\",\"doi\":\"10.1109/IJCNN52387.2021.9534389\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In object-stacking scenes, robotic manipulation is one of the most important research topics in robotics. It is particularly significant to reason object relationships and obtain intelligent manipulation order for more advanced interaction between the robot and the environment. However, many existing methods focus on individual object features and ignore contextual information, leading to great challenges in efficiently reasoning manipulation relationship. In this paper, we introduce a novel graph-based visual manipulation relationship reasoning architecture that directly outputs object relationships and manipulation order. Our model first extracts features and detects objects from RGB images, and then adopts Graph Convolutional Network (GCN) to collect contextual information between objects. Moreover, a relationship filtering network is built to reduce object pairs before reasoning and improve the efficiency of relation reasoning. 
The experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods on reasoning object relationships in obiect-stackina scenes.\",\"PeriodicalId\":396583,\"journal\":{\"name\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN52387.2021.9534389\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN52387.2021.9534389","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Robotic manipulation in object-stacking scenes is one of the most important research topics in robotics. Reasoning about object relationships and obtaining an intelligent manipulation order are particularly important for more advanced interaction between the robot and its environment. However, many existing methods focus on individual object features and ignore contextual information, which makes it difficult to reason manipulation relationships efficiently. In this paper, we introduce a novel graph-based visual manipulation relationship reasoning architecture that directly outputs object relationships and manipulation order. Our model first extracts features and detects objects from RGB images, and then adopts a Graph Convolutional Network (GCN) to collect contextual information between objects. Moreover, a relationship filtering network is built to reduce the number of object pairs before reasoning and improve the efficiency of relation reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning object relationships in object-stacking scenes.
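The abstract does not come with code, so the following is a minimal PyTorch sketch of the kind of pipeline it describes: detected-object features are refined by a graph convolution over an object adjacency graph, candidate pairs are pruned by a learned filtering score, and only the surviving pairs are classified into manipulation relations. All class names, feature dimensions, and the three-way relation head (e.g. parent/child/no-relation) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of GCN-based manipulation-
# relationship reasoning with a pair-filtering stage, as outlined in the
# abstract. Dimensions and module names are assumptions.

import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution step: mean-aggregate neighbor features via the
    adjacency matrix (with self-loops), then apply a linear transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) object features; adj: (N, N) adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = (adj / deg) @ x  # mean aggregation over neighbors
        return torch.relu(self.linear(h))


class RelationReasoner(nn.Module):
    """Refine node features with a GCN, filter object pairs, then classify
    the relation of each surviving ordered pair."""

    def __init__(self, feat_dim: int = 256, num_relations: int = 3):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, feat_dim)
        self.pair_filter = nn.Linear(2 * feat_dim, 1)       # keep/drop score
        self.rel_classifier = nn.Linear(2 * feat_dim, num_relations)

    def forward(self, obj_feats, adj, keep_threshold: float = 0.5):
        h = self.gcn(obj_feats, adj)                        # contextual features
        n = h.size(0)
        idx_i, idx_j = torch.meshgrid(
            torch.arange(n), torch.arange(n), indexing="ij")
        mask = idx_i != idx_j                               # exclude self-pairs
        pairs = torch.cat([h[idx_i[mask]], h[idx_j[mask]]], dim=-1)
        keep = torch.sigmoid(self.pair_filter(pairs)).squeeze(-1)
        kept = keep > keep_threshold                        # prune unlikely pairs
        logits = self.rel_classifier(pairs[kept])           # e.g. parent/child/none
        return kept, logits


if __name__ == "__main__":
    feats = torch.randn(5, 256)   # features of 5 detected objects (assumed dim)
    adj = torch.ones(5, 5)        # fully connected graph incl. self-loops
    kept, logits = RelationReasoner()(feats, adj)
    print(kept.sum().item(), "pairs kept;", logits.shape)
```

The filtering head mirrors the efficiency argument in the abstract: scoring all N(N-1) ordered pairs is cheap relative to relation classification, so discarding unlikely pairs first reduces the work done by the reasoning stage.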