A modal fusion network with dual attention mechanism for 6D pose estimation

Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang
{"title":"A modal fusion network with dual attention mechanism for 6D pose estimation","authors":"Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang","doi":"10.1007/s00371-024-03614-w","DOIUrl":null,"url":null,"abstract":"<p>The 6D pose estimation based on RGB-D data holds significant application value in computer vision and related fields. Currently, deep learning methods commonly employ convolutional networks for feature extraction, which are sensitive to keypoints at close distances but overlook information related to keypoints at longer distances. Moreover, in subsequent stages, there is a failure to effectively fuse spatial features (depth channel features) and color texture features (RGB channel features). Consequently, this limitation results in compromised accuracy in existing 6D pose networks based on RGB-D data. To solve this problem, a novel end-to-end 6D pose estimation network is proposed. In the branch of depth data processing network, the global spatial weight is established by using the attention mechanism of mask vector to realize robust extraction of depth features. In the phase of feature fusion, a symmetric fusion module is introduced. In this module, spatial features and color texture features are self-related fused by means of cross-attention mechanism. Experimental evaluations were performed on the LINEMOD and LINEMOD-OCLUSION datasets, and the ADD(-S) scores of our method can reach 95.84% and 47.89%, respectively. Compared to state-of-the-art methods, our method demonstrates superior performance in pose estimation for objects with complex shapes. Moreover, in the presence of occlusion, the pose estimation accuracy of our method for asymmetric objects has been effectively improved.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03614-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

6D pose estimation based on RGB-D data holds significant application value in computer vision and related fields. Current deep learning methods commonly employ convolutional networks for feature extraction, which are sensitive to keypoints at close range but overlook information from keypoints at longer distances. Moreover, subsequent stages fail to effectively fuse spatial features (depth-channel features) with color-texture features (RGB-channel features), which compromises the accuracy of existing RGB-D-based 6D pose networks. To address this problem, a novel end-to-end 6D pose estimation network is proposed. In the depth-data branch, a global spatial weight is established through a mask-vector attention mechanism to extract depth features robustly. In the feature-fusion phase, a symmetric fusion module is introduced, in which spatial features and color-texture features are mutually fused through a cross-attention mechanism. Experimental evaluations were performed on the LINEMOD and LINEMOD-OCCLUSION datasets, where the ADD(-S) scores of our method reach 95.84% and 47.89%, respectively. Compared with state-of-the-art methods, our method demonstrates superior performance in pose estimation for objects with complex shapes. Moreover, in the presence of occlusion, the pose estimation accuracy of our method for asymmetric objects is effectively improved.
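The abstract does not detail how the mask-vector attention establishes the global spatial weight. One plausible reading, sketched below in PyTorch, pools a global descriptor over the masked object points and scores each per-point depth feature against it; every name here (MaskVectorAttention, depth_feat, mask) is an illustrative assumption, not the authors' implementation.

```python
# Highly speculative sketch of the depth branch's "mask-vector attention":
# the abstract only states that a global spatial weight is built from a mask
# vector, so this pools a descriptor over masked object points and scores each
# per-point depth feature against it. All names are illustrative assumptions.
import torch
import torch.nn as nn

class MaskVectorAttention(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, depth_feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # depth_feat: (B, N, dim) per-point geometry features
        # mask:       (B, N) binary object mask (1.0 = object point)
        mask = mask.unsqueeze(-1)                                         # (B, N, 1)
        # Global descriptor pooled over object points only.
        pooled = (depth_feat * mask).sum(1) / mask.sum(1).clamp(min=1.0)  # (B, dim)
        # Global spatial weight: agreement of each point with the descriptor.
        logits = self.score(depth_feat * pooled.unsqueeze(1))             # (B, N, 1)
        weights = torch.softmax(logits.masked_fill(mask == 0, -1e9), dim=1)
        return depth_feat * weights  # re-weighted depth features
```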

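Likewise, the symmetric fusion module is described only as cross-attention between the spatial (depth) and color-texture (RGB) streams. A minimal sketch of such bidirectional cross-attention, assuming per-point features of equal dimension in both streams, might look as follows; the module and parameter names are hypothetical, not the authors' code.

```python
# Minimal sketch of a symmetric fusion module built from bidirectional
# cross-attention, assuming per-point RGB and depth features of equal
# dimension. This is not the authors' code; names and shapes are assumed.
import torch
import torch.nn as nn

class SymmetricCrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Each stream attends to the other: RGB queries depth, depth queries RGB.
        self.rgb_to_depth = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (B, N, dim)
        rgb_enh, _ = self.rgb_to_depth(query=rgb_feat, key=depth_feat, value=depth_feat)
        depth_enh, _ = self.depth_to_rgb(query=depth_feat, key=rgb_feat, value=rgb_feat)
        # Concatenate the two mutually enhanced streams and project back.
        return self.out(torch.cat([rgb_enh, depth_enh], dim=-1))

fuse = SymmetricCrossAttentionFusion()
fused = fuse(torch.randn(2, 500, 128), torch.randn(2, 500, 128))  # (2, 500, 128)
```

The symmetric design treats neither modality as dominant: each stream's queries are answered by the other stream's keys and values before the two are merged.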

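For context, the reported ADD(-S) scores follow the standard LINEMOD protocol: ADD averages the distance between model points transformed by the ground-truth and predicted poses, ADD-S substitutes the closest-point distance for symmetric objects, and a pose counts as correct when the error falls below 10% of the object diameter. A NumPy sketch of the per-object metric (variable names are illustrative):

```python
# Standard ADD / ADD-S pose-error metrics used on LINEMOD
# (Hinterstoisser et al.); variable names here are illustrative.
import numpy as np

def add_error(pts, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between model points under the GT and predicted poses."""
    gt = pts @ R_gt.T + t_gt        # (N, 3)
    pred = pts @ R_pred.T + t_pred  # (N, 3)
    return np.linalg.norm(gt - pred, axis=1).mean()

def adds_error(pts, R_gt, t_gt, R_pred, t_pred):
    """Symmetric objects: mean closest-point distance (O(N^2) for clarity)."""
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R_pred.T + t_pred
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)  # (N, N)
    return d.min(axis=0).mean()  # for each predicted point, nearest GT point

def pose_correct(err, diameter, frac=0.1):
    """A pose counts as correct if the error is below 10% of the object diameter."""
    return err < frac * diameter
```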