An Efficient Color and Geometric Feature Fusion Module for 6D Object Pose Estimation

IF 1.5 · Q3 · Automation & Control Systems
Jiangeng Li, Hong Liu, Gao Huang, Guoyu Zuo
{"title":"An Efficient Color and Geometric Feature Fusion Module for 6D Object Pose Estiamtion","authors":"Jiangeng Li, Hong Liu, Gao Huang, Guoyu Zuo","doi":"10.1109/CYBER55403.2022.9907032","DOIUrl":null,"url":null,"abstract":"6D pose estimation is widely used in robot tasks such as sorting and grasping. RGB-D-based methods have recently attained brilliant success, but they are still susceptible to heavy occlusion. Our critical insight is that color and geometry information in RGBD images are two complementary data, and the crux of the pose estimation problem under occlusion is fully leveraging them. Towards this end, we propose a new color and geometry feature fusion module that can efficiently leverage two complementary data sources from RGB-D images. Unlike prior fusion methods, we conduct a two-stage fusion strategy to do color-depth fusion and local-global fusion successively. Specifically, we fuse the color features extracted from RGB images into the point cloud in the first stage. In the second stage, we extract local and global features from the fused point cloud using an ASSANet-like network and splice them together to obtain the final fusion features. We conducted experiments on the widely used LineMod and YCB-Video datasets, which shows that our method improves the prediction accuracy while reducing the training time.","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Cybersystems and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CYBER55403.2022.9907032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

6D pose estimation is widely used in robot tasks such as sorting and grasping. RGB-D-based methods have recently achieved notable success, but they remain susceptible to heavy occlusion. Our key insight is that the color and geometry information in RGB-D images are complementary, and that the crux of pose estimation under occlusion is fully leveraging both. To this end, we propose a new color and geometry feature fusion module that efficiently exploits these two complementary data sources in RGB-D images. Unlike prior fusion methods, we adopt a two-stage fusion strategy that performs color-depth fusion and local-global fusion in succession. Specifically, in the first stage we fuse the color features extracted from the RGB image into the point cloud. In the second stage, we extract local and global features from the fused point cloud using an ASSANet-like network and splice them together to obtain the final fused features. Experiments on the widely used LineMod and YCB-Video datasets show that our method improves prediction accuracy while reducing training time.
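As a rough illustration of the two-stage fusion described above, the sketch below shows one way such a module could be wired up in PyTorch. Every name, tensor shape, and layer choice is an assumption made for illustration (in particular, a shared point-wise MLP stands in for the ASSANet-like backbone); it is not the authors' implementation.

    # Hedged sketch of the two-stage fusion from the abstract.
    # All class names, shapes, and layers are illustrative assumptions,
    # not the paper's actual code.
    import torch
    import torch.nn as nn

    class TwoStageFusion(nn.Module):
        def __init__(self, color_dim=32, geo_dim=3, local_dim=128, global_dim=256):
            super().__init__()
            # Stage 2a: per-point ("local") features from the color-augmented points.
            # A real ASSANet-style network would use set abstraction over point
            # neighborhoods; a shared MLP stands in for it here.
            self.local_mlp = nn.Sequential(
                nn.Conv1d(geo_dim + color_dim, 64, 1), nn.ReLU(),
                nn.Conv1d(64, local_dim, 1), nn.ReLU(),
            )
            # Stage 2b: a global descriptor pooled over all points.
            self.global_mlp = nn.Sequential(
                nn.Conv1d(local_dim, global_dim, 1), nn.ReLU(),
            )

        def forward(self, points, color_feat_map, pixel_uv):
            # points:         (B, N, 3)    point cloud lifted from the depth image
            # color_feat_map: (B, C, H, W) CNN features of the RGB image
            # pixel_uv:       (B, N, 2)    integer (u, v) pixel of each point
            B, N, _ = points.shape
            Bc, C, H, W = color_feat_map.shape

            # Stage 1: color-depth fusion -- gather each point's color feature
            # at its projected pixel and append it to the 3D coordinates.
            u, v = pixel_uv[..., 0], pixel_uv[..., 1]                    # (B, N)
            flat = color_feat_map.view(Bc, C, H * W)                     # (B, C, H*W)
            idx = (v * W + u).long().unsqueeze(1).expand(-1, C, -1)      # (B, C, N)
            point_color = torch.gather(flat, 2, idx)                     # (B, C, N)
            fused = torch.cat([points.transpose(1, 2), point_color], 1)  # (B, 3+C, N)

            # Stage 2: local-global fusion -- per-point features plus a
            # max-pooled global feature, spliced together for a pose head.
            local_feat = self.local_mlp(fused)                           # (B, local_dim, N)
            global_feat = self.global_mlp(local_feat).max(2, keepdim=True)[0]
            return torch.cat([local_feat, global_feat.expand(-1, -1, N)], 1)

    if __name__ == "__main__":
        fusion = TwoStageFusion()
        pts = torch.rand(2, 1024, 3)
        feats = torch.rand(2, 32, 120, 160)
        uv = torch.randint(0, 120, (2, 1024, 2))
        print(fusion(pts, feats, uv).shape)  # torch.Size([2, 384, 1024])

The split mirrors the abstract: stage one attaches a per-pixel color feature to each 3D point, and stage two splices per-point (local) features with a pooled global descriptor before they would be passed on to a pose head.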
Source journal
IET Cybersystems and Robotics (Computer Science - Information Systems)
CiteScore: 3.70
Self-citation rate: 0.00%
Articles published: 31
Review time: 34 weeks