OLF: RGB-D adaptive late fusion for robust 6D pose estimation

Théo Petitjean, Zongwei Wu, C. Demonceaux, O. Laligant
{"title":"基于RGB-D自适应后期融合的稳健6D姿态估计","authors":"Théo Petitjean, Zongwei Wu, C. Demonceaux, O. Laligant","doi":"10.1117/12.2690943","DOIUrl":null,"url":null,"abstract":"RGB-D 6D pose estimation has recently gained significant research attention due to the complementary information provided by depth data. However, in real-world scenarios, especially in industrial applications, the depth and color images are often more noisy1 . 2 Existing methods typically employ fusion designs that equally average RGB and depth features, which may not be optimal. In this paper, we propose a novel fusion design that adaptively merges RGB-D cues. Our approach involves assigning two learnable weight α1 and α2 to adjust the RGB and depth contributions with respect to the network depth. This enables us to improve the robustness against low-quality depth input in a simple yet effective manner. We conducted extensive experiments on the 6D pose estimation benchmark and demonstrated the effectiveness of our method. We evaluated our network in conjunction with DenseFusion on two datasets (LineMod3 and YCB4) using similar noise scenarios to verify the usefulness of reinforcing the fusion with the α1 and α2 parameters. Our experiments show that our method outperforms existing methods, particularly in low-quality depth input scenarios. We plan to make our source code publicly available for future research.","PeriodicalId":295011,"journal":{"name":"International Conference on Quality Control by Artificial Vision","volume":"12749 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"OLF: RGB-D adaptive late fusion for robust 6D pose estimation\",\"authors\":\"Théo Petitjean, Zongwei Wu, C. Demonceaux, O. Laligant\",\"doi\":\"10.1117/12.2690943\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"RGB-D 6D pose estimation has recently gained significant research attention due to the complementary information provided by depth data. However, in real-world scenarios, especially in industrial applications, the depth and color images are often more noisy1 . 2 Existing methods typically employ fusion designs that equally average RGB and depth features, which may not be optimal. In this paper, we propose a novel fusion design that adaptively merges RGB-D cues. Our approach involves assigning two learnable weight α1 and α2 to adjust the RGB and depth contributions with respect to the network depth. This enables us to improve the robustness against low-quality depth input in a simple yet effective manner. We conducted extensive experiments on the 6D pose estimation benchmark and demonstrated the effectiveness of our method. We evaluated our network in conjunction with DenseFusion on two datasets (LineMod3 and YCB4) using similar noise scenarios to verify the usefulness of reinforcing the fusion with the α1 and α2 parameters. Our experiments show that our method outperforms existing methods, particularly in low-quality depth input scenarios. 
We plan to make our source code publicly available for future research.\",\"PeriodicalId\":295011,\"journal\":{\"name\":\"International Conference on Quality Control by Artificial Vision\",\"volume\":\"12749 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Quality Control by Artificial Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2690943\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Quality Control by Artificial Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2690943","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

RGB-D 6D pose estimation has recently gained significant research attention due to the complementary information provided by depth data. However, in real-world scenarios, especially in industrial applications, the depth and color images are often noisier. Existing methods typically employ fusion designs that average RGB and depth features equally, which may not be optimal. In this paper, we propose a novel fusion design that adaptively merges RGB-D cues. Our approach assigns two learnable weights, α1 and α2, to adjust the RGB and depth contributions with respect to the network depth. This enables us to improve robustness against low-quality depth input in a simple yet effective manner. We conducted extensive experiments on the 6D pose estimation benchmark and demonstrated the effectiveness of our method. We evaluated our network in conjunction with DenseFusion on two datasets (LineMod and YCB) under similar noise scenarios to verify the usefulness of reinforcing the fusion with the α1 and α2 parameters. Our experiments show that our method outperforms existing methods, particularly in low-quality depth input scenarios. We plan to make our source code publicly available for future research.
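
The abstract describes weighting RGB and depth features with learnable scalars that vary with network depth before merging them. The following is a minimal PyTorch sketch of such an adaptive late-fusion step; the module name, per-stage parameterization, and the softmax normalization of the two weights are illustrative assumptions, not the authors' released implementation.

# A minimal sketch of learnable-weight RGB-D late fusion (illustrative; not the
# authors' released code). Each fusion stage carries its own pair of learnable
# scalars (alpha1, alpha2) that scale the RGB and depth features before merging.
import torch
import torch.nn as nn


class AdaptiveLateFusion(nn.Module):
    """Fuse per-stage RGB and depth features with learnable weights alpha1/alpha2."""

    def __init__(self, num_stages: int):
        super().__init__()
        # One learnable weight per stage for each modality, initialized to 1.0
        # so training starts from an equal-average fusion.
        self.alpha1 = nn.Parameter(torch.ones(num_stages))  # RGB weights
        self.alpha2 = nn.Parameter(torch.ones(num_stages))  # depth weights

    def forward(self, rgb_feats, depth_feats):
        """rgb_feats / depth_feats: lists of per-stage tensors with matching shapes."""
        fused = []
        for i, (f_rgb, f_d) in enumerate(zip(rgb_feats, depth_feats)):
            # Normalizing the pair of weights (an assumption here) keeps the fused
            # feature on a comparable scale whatever values the alphas learn.
            w = torch.softmax(torch.stack([self.alpha1[i], self.alpha2[i]]), dim=0)
            fused.append(w[0] * f_rgb + w[1] * f_d)
        return fused


if __name__ == "__main__":
    fusion = AdaptiveLateFusion(num_stages=3)
    rgb = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    depth = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    out = fusion(rgb, depth)
    print([o.shape for o in out])  # three fused feature maps, same shapes as inputs

In this sketch, a weight pair learned to favor α1 at a given stage down-weights the depth branch there, which is the mechanism the paper uses to stay robust when the depth input is of low quality.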